00:00:00.001 Started by upstream project "autotest-nightly" build number 3632
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3014
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.218 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.219 The recommended git tool is: git
00:00:00.219 using credential 00000000-0000-0000-0000-000000000002
00:00:00.221 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.274 Fetching changes from the remote Git repository
00:00:00.276 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.339 Using shallow fetch with depth 1
00:00:00.339 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.339 > git --version # timeout=10
00:00:00.375 > git --version # 'git version 2.39.2'
00:00:00.375 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.376 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.376 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.658 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.673 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.686 Checking out Revision e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f (FETCH_HEAD)
00:00:06.686 > git config core.sparsecheckout # timeout=10
00:00:06.700 > git read-tree -mu HEAD # timeout=10
00:00:06.718 > git checkout -f e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f # timeout=5
00:00:06.738 Commit message: "jenkins/reset: add APC-C14 and APC-C18"
00:00:06.739 > git rev-list --no-walk e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f # timeout=10
00:00:06.830 [Pipeline] Start of Pipeline
00:00:06.846 [Pipeline] library
00:00:06.848 Loading library shm_lib@master
00:00:06.848 Library shm_lib@master is cached. Copying from home.
00:00:06.872 [Pipeline] node
00:00:06.883 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.885 [Pipeline] {
00:00:06.898 [Pipeline] catchError
00:00:06.900 [Pipeline] {
00:00:06.914 [Pipeline] wrap
00:00:06.921 [Pipeline] {
00:00:06.930 [Pipeline] stage
00:00:06.932 [Pipeline] { (Prologue)
00:00:07.197 [Pipeline] sh
00:00:07.488 + logger -p user.info -t JENKINS-CI
00:00:07.513 [Pipeline] echo
00:00:07.514 Node: CYP12
00:00:07.523 [Pipeline] sh
00:00:07.828 [Pipeline] setCustomBuildProperty
00:00:07.839 [Pipeline] echo
00:00:07.841 Cleanup processes
00:00:07.845 [Pipeline] sh
00:00:08.131 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.131 3632971 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.148 [Pipeline] sh
00:00:08.433 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.433 ++ grep -v 'sudo pgrep'
00:00:08.433 ++ awk '{print $1}'
00:00:08.433 + sudo kill -9
00:00:08.433 + true
00:00:08.447 [Pipeline] cleanWs
00:00:08.456 [WS-CLEANUP] Deleting project workspace...
00:00:08.456 [WS-CLEANUP] Deferred wipeout is used...
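[Editor's note] The "Cleanup processes" step traced above amounts to the following standalone sketch. The workspace path and the pgrep/grep/awk pipeline are copied from this log; xargs -r is an assumption substituted for the command substitution seen in the trace so that an empty PID list is a no-op rather than an error.

    # Sketch only: kill any SPDK processes left over from a previous run of this job.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}' \
        | xargs -r sudo kill -9 || true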
00:00:08.463 [WS-CLEANUP] done 00:00:08.467 [Pipeline] setCustomBuildProperty 00:00:08.477 [Pipeline] sh 00:00:08.760 + sudo git config --global --replace-all safe.directory '*' 00:00:08.834 [Pipeline] nodesByLabel 00:00:08.836 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.846 [Pipeline] httpRequest 00:00:08.851 HttpMethod: GET 00:00:08.851 URL: http://10.211.164.96/packages/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:00:08.857 Sending request to url: http://10.211.164.96/packages/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:00:08.860 Response Code: HTTP/1.1 200 OK 00:00:08.861 Success: Status code 200 is in the accepted range: 200,404 00:00:08.862 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:00:09.537 [Pipeline] sh 00:00:09.824 + tar --no-same-owner -xf jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:00:09.847 [Pipeline] httpRequest 00:00:09.853 HttpMethod: GET 00:00:09.854 URL: http://10.211.164.96/packages/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:09.854 Sending request to url: http://10.211.164.96/packages/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:09.866 Response Code: HTTP/1.1 200 OK 00:00:09.867 Success: Status code 200 is in the accepted range: 200,404 00:00:09.868 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:30.721 [Pipeline] sh 00:00:31.009 + tar --no-same-owner -xf spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:33.567 [Pipeline] sh 00:00:33.882 + git -C spdk log --oneline -n5 00:00:33.882 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:00:33.882 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:00:33.882 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:00:33.882 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:00:33.882 c11e5c113 bdev: introduce bdev_nvme_cdw12 and cdw13, and add them to ext_opts 00:00:33.898 [Pipeline] } 00:00:33.916 [Pipeline] // stage 00:00:33.926 [Pipeline] stage 00:00:33.928 [Pipeline] { (Prepare) 00:00:33.947 [Pipeline] writeFile 00:00:33.964 [Pipeline] sh 00:00:34.252 + logger -p user.info -t JENKINS-CI 00:00:34.266 [Pipeline] sh 00:00:34.552 + logger -p user.info -t JENKINS-CI 00:00:34.566 [Pipeline] sh 00:00:34.852 + cat autorun-spdk.conf 00:00:34.852 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.852 SPDK_TEST_NVMF=1 00:00:34.852 SPDK_TEST_NVME_CLI=1 00:00:34.852 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.852 SPDK_TEST_NVMF_NICS=e810 00:00:34.852 SPDK_RUN_UBSAN=1 00:00:34.852 NET_TYPE=phy 00:00:34.861 RUN_NIGHTLY=1 00:00:34.867 [Pipeline] readFile 00:00:34.896 [Pipeline] withEnv 00:00:34.898 [Pipeline] { 00:00:34.910 [Pipeline] sh 00:00:35.195 + set -ex 00:00:35.195 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:35.195 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:35.195 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.195 ++ SPDK_TEST_NVMF=1 00:00:35.195 ++ SPDK_TEST_NVME_CLI=1 00:00:35.195 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.195 ++ SPDK_TEST_NVMF_NICS=e810 00:00:35.195 ++ SPDK_RUN_UBSAN=1 00:00:35.195 ++ NET_TYPE=phy 00:00:35.195 ++ RUN_NIGHTLY=1 00:00:35.195 + case $SPDK_TEST_NVMF_NICS in 00:00:35.195 + DRIVERS=ice 00:00:35.195 + [[ tcp == \r\d\m\a ]] 00:00:35.195 + [[ -n ice ]] 00:00:35.195 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:35.195 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:00:35.195 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:35.195 rmmod: ERROR: Module irdma is not currently loaded 00:00:35.195 rmmod: ERROR: Module i40iw is not currently loaded 00:00:35.195 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:35.196 + true 00:00:35.196 + for D in $DRIVERS 00:00:35.196 + sudo modprobe ice 00:00:35.196 + exit 0 00:00:35.207 [Pipeline] } 00:00:35.226 [Pipeline] // withEnv 00:00:35.231 [Pipeline] } 00:00:35.247 [Pipeline] // stage 00:00:35.256 [Pipeline] catchError 00:00:35.258 [Pipeline] { 00:00:35.272 [Pipeline] timeout 00:00:35.272 Timeout set to expire in 40 min 00:00:35.274 [Pipeline] { 00:00:35.287 [Pipeline] stage 00:00:35.289 [Pipeline] { (Tests) 00:00:35.299 [Pipeline] sh 00:00:35.584 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:35.584 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:35.584 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:35.584 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:35.584 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:35.584 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:35.584 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:35.584 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:35.584 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:35.584 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:35.584 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:35.584 + source /etc/os-release 00:00:35.584 ++ NAME='Fedora Linux' 00:00:35.584 ++ VERSION='38 (Cloud Edition)' 00:00:35.584 ++ ID=fedora 00:00:35.584 ++ VERSION_ID=38 00:00:35.584 ++ VERSION_CODENAME= 00:00:35.584 ++ PLATFORM_ID=platform:f38 00:00:35.584 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:35.584 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:35.584 ++ LOGO=fedora-logo-icon 00:00:35.584 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:35.584 ++ HOME_URL=https://fedoraproject.org/ 00:00:35.584 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:35.584 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:35.584 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:35.584 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:35.584 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:35.584 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:35.584 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:35.584 ++ SUPPORT_END=2024-05-14 00:00:35.584 ++ VARIANT='Cloud Edition' 00:00:35.584 ++ VARIANT_ID=cloud 00:00:35.584 + uname -a 00:00:35.584 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:35.584 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:38.134 Hugepages 00:00:38.134 node hugesize free / total 00:00:38.134 node0 1048576kB 0 / 0 00:00:38.134 node0 2048kB 0 / 0 00:00:38.134 node1 1048576kB 0 / 0 00:00:38.134 node1 2048kB 0 / 0 00:00:38.134 00:00:38.134 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:38.134 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:38.134 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:38.134 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:38.134 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:38.134 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:38.134 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:38.134 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:38.134 I/OAT 
0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:38.395 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:38.395 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:38.395 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:38.395 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:38.395 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:38.395 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:38.395 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:38.395 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:38.395 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:38.395 + rm -f /tmp/spdk-ld-path 00:00:38.395 + source autorun-spdk.conf 00:00:38.395 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.395 ++ SPDK_TEST_NVMF=1 00:00:38.395 ++ SPDK_TEST_NVME_CLI=1 00:00:38.395 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.395 ++ SPDK_TEST_NVMF_NICS=e810 00:00:38.395 ++ SPDK_RUN_UBSAN=1 00:00:38.395 ++ NET_TYPE=phy 00:00:38.395 ++ RUN_NIGHTLY=1 00:00:38.395 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:38.395 + [[ -n '' ]] 00:00:38.395 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:38.395 + for M in /var/spdk/build-*-manifest.txt 00:00:38.395 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:38.395 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:38.395 + for M in /var/spdk/build-*-manifest.txt 00:00:38.395 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:38.395 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:38.395 ++ uname 00:00:38.395 + [[ Linux == \L\i\n\u\x ]] 00:00:38.395 + sudo dmesg -T 00:00:38.395 + sudo dmesg --clear 00:00:38.395 + dmesg_pid=3633964 00:00:38.395 + [[ Fedora Linux == FreeBSD ]] 00:00:38.395 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.395 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.395 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:38.395 + [[ -x /usr/src/fio-static/fio ]] 00:00:38.395 + export FIO_BIN=/usr/src/fio-static/fio 00:00:38.395 + FIO_BIN=/usr/src/fio-static/fio 00:00:38.395 + sudo dmesg -Tw 00:00:38.395 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:38.395 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:38.395 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:38.395 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.395 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.395 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:38.395 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.395 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.395 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.395 Test configuration: 00:00:38.395 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.395 SPDK_TEST_NVMF=1 00:00:38.395 SPDK_TEST_NVME_CLI=1 00:00:38.395 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.395 SPDK_TEST_NVMF_NICS=e810 00:00:38.395 SPDK_RUN_UBSAN=1 00:00:38.395 NET_TYPE=phy 00:00:38.657 RUN_NIGHTLY=1 12:43:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:38.657 12:43:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:38.657 12:43:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:38.657 12:43:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:38.657 12:43:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.657 12:43:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.657 12:43:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.657 12:43:43 -- paths/export.sh@5 -- $ export PATH 00:00:38.657 12:43:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.657 12:43:43 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:38.657 12:43:43 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:38.657 12:43:43 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714128223.XXXXXX 00:00:38.657 12:43:43 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714128223.SWvxVN 00:00:38.657 12:43:43 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:38.657 12:43:43 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
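[Editor's note] The "Test configuration" dump above is the same autorun-spdk.conf that drove the NIC driver setup earlier in this log (the rmmod/modprobe block around 00:00:35.195). A minimal sketch of that pattern, reconstructed from the xtrace rather than taken verbatim from the jbp scripts, is:

    # Sketch: consume the test configuration and load the NIC driver it implies.
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;   # Intel E810 NICs use the ice driver (as in this run)
    esac
    if [[ -n "$DRIVERS" ]]; then
        # Unload RDMA-capable drivers first; ignore "not currently loaded" errors.
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe "$D"
        done
    fi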
00:00:38.657 12:43:43 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:38.657 12:43:43 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:38.657 12:43:43 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:38.657 12:43:43 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:38.657 12:43:43 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:38.657 12:43:43 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.657 12:43:43 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:38.657 12:43:43 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:38.657 12:43:43 -- pm/common@17 -- $ local monitor 00:00:38.657 12:43:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.657 12:43:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3634000 00:00:38.657 12:43:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.657 12:43:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3634002 00:00:38.657 12:43:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.657 12:43:43 -- pm/common@21 -- $ date +%s 00:00:38.657 12:43:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3634004 00:00:38.657 12:43:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.657 12:43:43 -- pm/common@21 -- $ date +%s 00:00:38.657 12:43:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3634007 00:00:38.657 12:43:43 -- pm/common@26 -- $ sleep 1 00:00:38.657 12:43:43 -- pm/common@21 -- $ date +%s 00:00:38.657 12:43:43 -- pm/common@21 -- $ date +%s 00:00:38.657 12:43:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714128223 00:00:38.657 12:43:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714128223 00:00:38.657 12:43:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714128223 00:00:38.657 12:43:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714128223 00:00:38.657 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714128223_collect-vmstat.pm.log 00:00:38.657 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714128223_collect-cpu-load.pm.log 00:00:38.657 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714128223_collect-bmc-pm.bmc.pm.log
00:00:38.657 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714128223_collect-cpu-temp.pm.log
00:00:39.601 12:43:44 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:39.601 12:43:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:39.601 12:43:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:39.601 12:43:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:39.601 12:43:44 -- spdk/autobuild.sh@16 -- $ date -u
00:00:39.601 Fri Apr 26 10:43:44 AM UTC 2024
00:00:39.601 12:43:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:39.601 v24.05-pre-448-g06472fb6d
00:00:39.601 12:43:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:39.601 12:43:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:39.601 12:43:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:39.601 12:43:44 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:39.601 12:43:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:39.601 12:43:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:39.862 ************************************
00:00:39.862 START TEST ubsan
00:00:39.862 ************************************
00:00:39.862 12:43:44 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:39.862 using ubsan
00:00:39.862
00:00:39.862 real 0m0.001s
00:00:39.862 user 0m0.000s
00:00:39.862 sys 0m0.001s
00:00:39.862 12:43:44 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:39.862 12:43:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:39.862 ************************************
00:00:39.862 END TEST ubsan
00:00:39.862 ************************************
00:00:39.862 12:43:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:39.862 12:43:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:39.862 12:43:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:39.862 12:43:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:39.862 12:43:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:39.862 12:43:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:39.862 12:43:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:39.862 12:43:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:39.863 12:43:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:39.863 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:39.863 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:40.123 Using 'verbs' RDMA provider
00:00:55.975 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:08.214 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:08.214 Creating mk/config.mk...done.
00:01:08.214 Creating mk/cc.flags.mk...done.
00:01:08.214 Type 'make' to build.
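[Editor's note] To reproduce this job's build configuration outside Jenkins, the configure flags can be taken verbatim from the autobuild.sh line above. The clone/submodule steps and the -j value are assumptions for a local machine; the CI node itself runs make -j144, as the next log entries show.

    # Sketch: rebuild SPDK locally with the same configuration as this run.
    git clone https://github.com/spdk/spdk && cd spdk
    git submodule update --init
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-shared
    make -j"$(nproc)"   # the CI node uses -j144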
00:01:08.214 12:44:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:08.214 12:44:12 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:08.214 12:44:12 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:08.214 12:44:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.214 ************************************ 00:01:08.214 START TEST make 00:01:08.214 ************************************ 00:01:08.214 12:44:12 -- common/autotest_common.sh@1111 -- $ make -j144 00:01:08.214 make[1]: Nothing to be done for 'all'. 00:01:16.350 The Meson build system 00:01:16.350 Version: 1.3.1 00:01:16.350 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:16.350 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:16.350 Build type: native build 00:01:16.350 Program cat found: YES (/usr/bin/cat) 00:01:16.350 Project name: DPDK 00:01:16.350 Project version: 23.11.0 00:01:16.350 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:16.350 C linker for the host machine: cc ld.bfd 2.39-16 00:01:16.350 Host machine cpu family: x86_64 00:01:16.350 Host machine cpu: x86_64 00:01:16.350 Message: ## Building in Developer Mode ## 00:01:16.350 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:16.350 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:16.350 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:16.350 Program python3 found: YES (/usr/bin/python3) 00:01:16.350 Program cat found: YES (/usr/bin/cat) 00:01:16.350 Compiler for C supports arguments -march=native: YES 00:01:16.350 Checking for size of "void *" : 8 00:01:16.350 Checking for size of "void *" : 8 (cached) 00:01:16.350 Library m found: YES 00:01:16.350 Library numa found: YES 00:01:16.350 Has header "numaif.h" : YES 00:01:16.350 Library fdt found: NO 00:01:16.350 Library execinfo found: NO 00:01:16.350 Has header "execinfo.h" : YES 00:01:16.350 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:16.350 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:16.350 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:16.350 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:16.350 Run-time dependency openssl found: YES 3.0.9 00:01:16.350 Run-time dependency libpcap found: YES 1.10.4 00:01:16.350 Has header "pcap.h" with dependency libpcap: YES 00:01:16.350 Compiler for C supports arguments -Wcast-qual: YES 00:01:16.350 Compiler for C supports arguments -Wdeprecated: YES 00:01:16.350 Compiler for C supports arguments -Wformat: YES 00:01:16.350 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:16.350 Compiler for C supports arguments -Wformat-security: NO 00:01:16.350 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:16.350 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:16.350 Compiler for C supports arguments -Wnested-externs: YES 00:01:16.350 Compiler for C supports arguments -Wold-style-definition: YES 00:01:16.350 Compiler for C supports arguments -Wpointer-arith: YES 00:01:16.350 Compiler for C supports arguments -Wsign-compare: YES 00:01:16.350 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:16.350 Compiler for C supports arguments -Wundef: YES 00:01:16.350 Compiler for C supports arguments -Wwrite-strings: YES 00:01:16.350 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:01:16.350 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:16.350 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:16.350 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:16.350 Program objdump found: YES (/usr/bin/objdump) 00:01:16.350 Compiler for C supports arguments -mavx512f: YES 00:01:16.350 Checking if "AVX512 checking" compiles: YES 00:01:16.350 Fetching value of define "__SSE4_2__" : 1 00:01:16.350 Fetching value of define "__AES__" : 1 00:01:16.350 Fetching value of define "__AVX__" : 1 00:01:16.350 Fetching value of define "__AVX2__" : 1 00:01:16.350 Fetching value of define "__AVX512BW__" : 1 00:01:16.350 Fetching value of define "__AVX512CD__" : 1 00:01:16.350 Fetching value of define "__AVX512DQ__" : 1 00:01:16.350 Fetching value of define "__AVX512F__" : 1 00:01:16.350 Fetching value of define "__AVX512VL__" : 1 00:01:16.350 Fetching value of define "__PCLMUL__" : 1 00:01:16.350 Fetching value of define "__RDRND__" : 1 00:01:16.350 Fetching value of define "__RDSEED__" : 1 00:01:16.350 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:16.350 Fetching value of define "__znver1__" : (undefined) 00:01:16.350 Fetching value of define "__znver2__" : (undefined) 00:01:16.350 Fetching value of define "__znver3__" : (undefined) 00:01:16.350 Fetching value of define "__znver4__" : (undefined) 00:01:16.350 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:16.350 Message: lib/log: Defining dependency "log" 00:01:16.350 Message: lib/kvargs: Defining dependency "kvargs" 00:01:16.350 Message: lib/telemetry: Defining dependency "telemetry" 00:01:16.350 Checking for function "getentropy" : NO 00:01:16.350 Message: lib/eal: Defining dependency "eal" 00:01:16.350 Message: lib/ring: Defining dependency "ring" 00:01:16.350 Message: lib/rcu: Defining dependency "rcu" 00:01:16.350 Message: lib/mempool: Defining dependency "mempool" 00:01:16.350 Message: lib/mbuf: Defining dependency "mbuf" 00:01:16.350 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:16.350 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:16.350 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:16.350 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:16.350 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:16.350 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:16.350 Compiler for C supports arguments -mpclmul: YES 00:01:16.350 Compiler for C supports arguments -maes: YES 00:01:16.350 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:16.350 Compiler for C supports arguments -mavx512bw: YES 00:01:16.350 Compiler for C supports arguments -mavx512dq: YES 00:01:16.350 Compiler for C supports arguments -mavx512vl: YES 00:01:16.350 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:16.350 Compiler for C supports arguments -mavx2: YES 00:01:16.350 Compiler for C supports arguments -mavx: YES 00:01:16.350 Message: lib/net: Defining dependency "net" 00:01:16.350 Message: lib/meter: Defining dependency "meter" 00:01:16.350 Message: lib/ethdev: Defining dependency "ethdev" 00:01:16.350 Message: lib/pci: Defining dependency "pci" 00:01:16.350 Message: lib/cmdline: Defining dependency "cmdline" 00:01:16.350 Message: lib/hash: Defining dependency "hash" 00:01:16.350 Message: lib/timer: Defining dependency "timer" 00:01:16.350 Message: lib/compressdev: Defining dependency "compressdev" 00:01:16.350 Message: lib/cryptodev: Defining 
dependency "cryptodev" 00:01:16.350 Message: lib/dmadev: Defining dependency "dmadev" 00:01:16.350 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:16.350 Message: lib/power: Defining dependency "power" 00:01:16.350 Message: lib/reorder: Defining dependency "reorder" 00:01:16.350 Message: lib/security: Defining dependency "security" 00:01:16.351 Has header "linux/userfaultfd.h" : YES 00:01:16.351 Has header "linux/vduse.h" : YES 00:01:16.351 Message: lib/vhost: Defining dependency "vhost" 00:01:16.351 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:16.351 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:16.351 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:16.351 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:16.351 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:16.351 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:16.351 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:16.351 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:16.351 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:16.351 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:16.351 Program doxygen found: YES (/usr/bin/doxygen) 00:01:16.351 Configuring doxy-api-html.conf using configuration 00:01:16.351 Configuring doxy-api-man.conf using configuration 00:01:16.351 Program mandb found: YES (/usr/bin/mandb) 00:01:16.351 Program sphinx-build found: NO 00:01:16.351 Configuring rte_build_config.h using configuration 00:01:16.351 Message: 00:01:16.351 ================= 00:01:16.351 Applications Enabled 00:01:16.351 ================= 00:01:16.351 00:01:16.351 apps: 00:01:16.351 00:01:16.351 00:01:16.351 Message: 00:01:16.351 ================= 00:01:16.351 Libraries Enabled 00:01:16.351 ================= 00:01:16.351 00:01:16.351 libs: 00:01:16.351 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:16.351 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:16.351 cryptodev, dmadev, power, reorder, security, vhost, 00:01:16.351 00:01:16.351 Message: 00:01:16.351 =============== 00:01:16.351 Drivers Enabled 00:01:16.351 =============== 00:01:16.351 00:01:16.351 common: 00:01:16.351 00:01:16.351 bus: 00:01:16.351 pci, vdev, 00:01:16.351 mempool: 00:01:16.351 ring, 00:01:16.351 dma: 00:01:16.351 00:01:16.351 net: 00:01:16.351 00:01:16.351 crypto: 00:01:16.351 00:01:16.351 compress: 00:01:16.351 00:01:16.351 vdpa: 00:01:16.351 00:01:16.351 00:01:16.351 Message: 00:01:16.351 ================= 00:01:16.351 Content Skipped 00:01:16.351 ================= 00:01:16.351 00:01:16.351 apps: 00:01:16.351 dumpcap: explicitly disabled via build config 00:01:16.351 graph: explicitly disabled via build config 00:01:16.351 pdump: explicitly disabled via build config 00:01:16.351 proc-info: explicitly disabled via build config 00:01:16.351 test-acl: explicitly disabled via build config 00:01:16.351 test-bbdev: explicitly disabled via build config 00:01:16.351 test-cmdline: explicitly disabled via build config 00:01:16.351 test-compress-perf: explicitly disabled via build config 00:01:16.351 test-crypto-perf: explicitly disabled via build config 00:01:16.351 test-dma-perf: explicitly disabled via build config 00:01:16.351 test-eventdev: explicitly disabled via build config 00:01:16.351 test-fib: explicitly disabled via build config 00:01:16.351 test-flow-perf: 
explicitly disabled via build config 00:01:16.351 test-gpudev: explicitly disabled via build config 00:01:16.351 test-mldev: explicitly disabled via build config 00:01:16.351 test-pipeline: explicitly disabled via build config 00:01:16.351 test-pmd: explicitly disabled via build config 00:01:16.351 test-regex: explicitly disabled via build config 00:01:16.351 test-sad: explicitly disabled via build config 00:01:16.351 test-security-perf: explicitly disabled via build config 00:01:16.351 00:01:16.351 libs: 00:01:16.351 metrics: explicitly disabled via build config 00:01:16.351 acl: explicitly disabled via build config 00:01:16.351 bbdev: explicitly disabled via build config 00:01:16.351 bitratestats: explicitly disabled via build config 00:01:16.351 bpf: explicitly disabled via build config 00:01:16.351 cfgfile: explicitly disabled via build config 00:01:16.351 distributor: explicitly disabled via build config 00:01:16.351 efd: explicitly disabled via build config 00:01:16.351 eventdev: explicitly disabled via build config 00:01:16.351 dispatcher: explicitly disabled via build config 00:01:16.351 gpudev: explicitly disabled via build config 00:01:16.351 gro: explicitly disabled via build config 00:01:16.351 gso: explicitly disabled via build config 00:01:16.351 ip_frag: explicitly disabled via build config 00:01:16.351 jobstats: explicitly disabled via build config 00:01:16.351 latencystats: explicitly disabled via build config 00:01:16.351 lpm: explicitly disabled via build config 00:01:16.351 member: explicitly disabled via build config 00:01:16.351 pcapng: explicitly disabled via build config 00:01:16.351 rawdev: explicitly disabled via build config 00:01:16.351 regexdev: explicitly disabled via build config 00:01:16.351 mldev: explicitly disabled via build config 00:01:16.351 rib: explicitly disabled via build config 00:01:16.351 sched: explicitly disabled via build config 00:01:16.351 stack: explicitly disabled via build config 00:01:16.351 ipsec: explicitly disabled via build config 00:01:16.351 pdcp: explicitly disabled via build config 00:01:16.351 fib: explicitly disabled via build config 00:01:16.351 port: explicitly disabled via build config 00:01:16.351 pdump: explicitly disabled via build config 00:01:16.351 table: explicitly disabled via build config 00:01:16.351 pipeline: explicitly disabled via build config 00:01:16.351 graph: explicitly disabled via build config 00:01:16.351 node: explicitly disabled via build config 00:01:16.351 00:01:16.351 drivers: 00:01:16.351 common/cpt: not in enabled drivers build config 00:01:16.351 common/dpaax: not in enabled drivers build config 00:01:16.351 common/iavf: not in enabled drivers build config 00:01:16.351 common/idpf: not in enabled drivers build config 00:01:16.351 common/mvep: not in enabled drivers build config 00:01:16.351 common/octeontx: not in enabled drivers build config 00:01:16.351 bus/auxiliary: not in enabled drivers build config 00:01:16.351 bus/cdx: not in enabled drivers build config 00:01:16.351 bus/dpaa: not in enabled drivers build config 00:01:16.351 bus/fslmc: not in enabled drivers build config 00:01:16.351 bus/ifpga: not in enabled drivers build config 00:01:16.351 bus/platform: not in enabled drivers build config 00:01:16.351 bus/vmbus: not in enabled drivers build config 00:01:16.351 common/cnxk: not in enabled drivers build config 00:01:16.351 common/mlx5: not in enabled drivers build config 00:01:16.351 common/nfp: not in enabled drivers build config 00:01:16.351 common/qat: not in enabled drivers build 
config 00:01:16.351 common/sfc_efx: not in enabled drivers build config 00:01:16.351 mempool/bucket: not in enabled drivers build config 00:01:16.351 mempool/cnxk: not in enabled drivers build config 00:01:16.351 mempool/dpaa: not in enabled drivers build config 00:01:16.351 mempool/dpaa2: not in enabled drivers build config 00:01:16.351 mempool/octeontx: not in enabled drivers build config 00:01:16.351 mempool/stack: not in enabled drivers build config 00:01:16.351 dma/cnxk: not in enabled drivers build config 00:01:16.351 dma/dpaa: not in enabled drivers build config 00:01:16.351 dma/dpaa2: not in enabled drivers build config 00:01:16.351 dma/hisilicon: not in enabled drivers build config 00:01:16.351 dma/idxd: not in enabled drivers build config 00:01:16.351 dma/ioat: not in enabled drivers build config 00:01:16.351 dma/skeleton: not in enabled drivers build config 00:01:16.351 net/af_packet: not in enabled drivers build config 00:01:16.351 net/af_xdp: not in enabled drivers build config 00:01:16.351 net/ark: not in enabled drivers build config 00:01:16.351 net/atlantic: not in enabled drivers build config 00:01:16.351 net/avp: not in enabled drivers build config 00:01:16.351 net/axgbe: not in enabled drivers build config 00:01:16.351 net/bnx2x: not in enabled drivers build config 00:01:16.351 net/bnxt: not in enabled drivers build config 00:01:16.351 net/bonding: not in enabled drivers build config 00:01:16.351 net/cnxk: not in enabled drivers build config 00:01:16.351 net/cpfl: not in enabled drivers build config 00:01:16.351 net/cxgbe: not in enabled drivers build config 00:01:16.351 net/dpaa: not in enabled drivers build config 00:01:16.351 net/dpaa2: not in enabled drivers build config 00:01:16.351 net/e1000: not in enabled drivers build config 00:01:16.351 net/ena: not in enabled drivers build config 00:01:16.351 net/enetc: not in enabled drivers build config 00:01:16.351 net/enetfec: not in enabled drivers build config 00:01:16.351 net/enic: not in enabled drivers build config 00:01:16.351 net/failsafe: not in enabled drivers build config 00:01:16.351 net/fm10k: not in enabled drivers build config 00:01:16.351 net/gve: not in enabled drivers build config 00:01:16.351 net/hinic: not in enabled drivers build config 00:01:16.351 net/hns3: not in enabled drivers build config 00:01:16.351 net/i40e: not in enabled drivers build config 00:01:16.351 net/iavf: not in enabled drivers build config 00:01:16.351 net/ice: not in enabled drivers build config 00:01:16.351 net/idpf: not in enabled drivers build config 00:01:16.351 net/igc: not in enabled drivers build config 00:01:16.351 net/ionic: not in enabled drivers build config 00:01:16.351 net/ipn3ke: not in enabled drivers build config 00:01:16.351 net/ixgbe: not in enabled drivers build config 00:01:16.351 net/mana: not in enabled drivers build config 00:01:16.351 net/memif: not in enabled drivers build config 00:01:16.351 net/mlx4: not in enabled drivers build config 00:01:16.351 net/mlx5: not in enabled drivers build config 00:01:16.351 net/mvneta: not in enabled drivers build config 00:01:16.351 net/mvpp2: not in enabled drivers build config 00:01:16.351 net/netvsc: not in enabled drivers build config 00:01:16.351 net/nfb: not in enabled drivers build config 00:01:16.351 net/nfp: not in enabled drivers build config 00:01:16.351 net/ngbe: not in enabled drivers build config 00:01:16.351 net/null: not in enabled drivers build config 00:01:16.351 net/octeontx: not in enabled drivers build config 00:01:16.351 net/octeon_ep: not in enabled 
drivers build config 00:01:16.351 net/pcap: not in enabled drivers build config 00:01:16.351 net/pfe: not in enabled drivers build config 00:01:16.351 net/qede: not in enabled drivers build config 00:01:16.351 net/ring: not in enabled drivers build config 00:01:16.351 net/sfc: not in enabled drivers build config 00:01:16.351 net/softnic: not in enabled drivers build config 00:01:16.351 net/tap: not in enabled drivers build config 00:01:16.351 net/thunderx: not in enabled drivers build config 00:01:16.352 net/txgbe: not in enabled drivers build config 00:01:16.352 net/vdev_netvsc: not in enabled drivers build config 00:01:16.352 net/vhost: not in enabled drivers build config 00:01:16.352 net/virtio: not in enabled drivers build config 00:01:16.352 net/vmxnet3: not in enabled drivers build config 00:01:16.352 raw/*: missing internal dependency, "rawdev" 00:01:16.352 crypto/armv8: not in enabled drivers build config 00:01:16.352 crypto/bcmfs: not in enabled drivers build config 00:01:16.352 crypto/caam_jr: not in enabled drivers build config 00:01:16.352 crypto/ccp: not in enabled drivers build config 00:01:16.352 crypto/cnxk: not in enabled drivers build config 00:01:16.352 crypto/dpaa_sec: not in enabled drivers build config 00:01:16.352 crypto/dpaa2_sec: not in enabled drivers build config 00:01:16.352 crypto/ipsec_mb: not in enabled drivers build config 00:01:16.352 crypto/mlx5: not in enabled drivers build config 00:01:16.352 crypto/mvsam: not in enabled drivers build config 00:01:16.352 crypto/nitrox: not in enabled drivers build config 00:01:16.352 crypto/null: not in enabled drivers build config 00:01:16.352 crypto/octeontx: not in enabled drivers build config 00:01:16.352 crypto/openssl: not in enabled drivers build config 00:01:16.352 crypto/scheduler: not in enabled drivers build config 00:01:16.352 crypto/uadk: not in enabled drivers build config 00:01:16.352 crypto/virtio: not in enabled drivers build config 00:01:16.352 compress/isal: not in enabled drivers build config 00:01:16.352 compress/mlx5: not in enabled drivers build config 00:01:16.352 compress/octeontx: not in enabled drivers build config 00:01:16.352 compress/zlib: not in enabled drivers build config 00:01:16.352 regex/*: missing internal dependency, "regexdev" 00:01:16.352 ml/*: missing internal dependency, "mldev" 00:01:16.352 vdpa/ifc: not in enabled drivers build config 00:01:16.352 vdpa/mlx5: not in enabled drivers build config 00:01:16.352 vdpa/nfp: not in enabled drivers build config 00:01:16.352 vdpa/sfc: not in enabled drivers build config 00:01:16.352 event/*: missing internal dependency, "eventdev" 00:01:16.352 baseband/*: missing internal dependency, "bbdev" 00:01:16.352 gpu/*: missing internal dependency, "gpudev" 00:01:16.352 00:01:16.352 00:01:16.352 Build targets in project: 84 00:01:16.352 00:01:16.352 DPDK 23.11.0 00:01:16.352 00:01:16.352 User defined options 00:01:16.352 buildtype : debug 00:01:16.352 default_library : shared 00:01:16.352 libdir : lib 00:01:16.352 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.352 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:16.352 c_link_args : 00:01:16.352 cpu_instruction_set: native 00:01:16.352 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:16.352 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:16.352 enable_docs : false 00:01:16.352 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:16.352 enable_kmods : false 00:01:16.352 tests : false 00:01:16.352 00:01:16.352 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:16.624 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:16.624 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:16.624 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:16.624 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:16.624 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:16.624 [5/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:16.624 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:16.624 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:16.885 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:16.886 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:16.886 [10/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:16.886 [11/264] Linking static target lib/librte_kvargs.a 00:01:16.886 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:16.886 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:16.886 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:16.886 [15/264] Linking static target lib/librte_log.a 00:01:16.886 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:16.886 [17/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:16.886 [18/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:16.886 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:16.886 [20/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:16.886 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:16.886 [22/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:16.886 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:16.886 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:16.886 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:16.886 [26/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:16.886 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:16.886 [28/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:16.886 [29/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:16.886 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:16.886 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:16.886 [32/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:16.886 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:16.886 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 
00:01:16.886 [35/264] Linking static target lib/librte_pci.a 00:01:16.886 [36/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:16.886 [37/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:16.886 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:16.886 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:17.145 [40/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:17.145 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:17.145 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:17.145 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:17.145 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:17.145 [45/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.145 [46/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:17.146 [47/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.146 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:17.146 [49/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:17.146 [50/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:17.146 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:17.146 [52/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:17.146 [53/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:17.146 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:17.146 [55/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:17.146 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:17.146 [57/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:17.146 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:17.146 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:17.146 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:17.406 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:17.406 [62/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:17.406 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:17.406 [64/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:17.406 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:17.406 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:17.406 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:17.406 [68/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:17.406 [69/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:17.406 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:17.406 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:17.407 [72/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:17.407 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:17.407 [74/264] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:17.407 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:17.407 [76/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:17.407 [77/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:17.407 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:17.407 [79/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:17.407 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:17.407 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:17.407 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:17.407 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:17.407 [84/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:17.407 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:17.407 [86/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:17.407 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:17.407 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:17.407 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:17.407 [90/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:17.407 [91/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:17.407 [92/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:17.407 [93/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:17.407 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:17.407 [95/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:17.407 [96/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:17.407 [97/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:17.407 [98/264] Linking static target lib/librte_ring.a 00:01:17.407 [99/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:17.407 [100/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:17.407 [101/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:17.407 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:17.407 [103/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:17.407 [104/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:17.407 [105/264] Linking static target lib/librte_meter.a 00:01:17.407 [106/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:17.407 [107/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:17.407 [108/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:17.407 [109/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:17.407 [110/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:17.407 [111/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:17.407 [112/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:17.407 [113/264] Linking static target lib/librte_telemetry.a 00:01:17.407 [114/264] Compiling 
C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:17.407 [115/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:17.407 [116/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:17.407 [117/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:17.407 [118/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.407 [119/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:17.407 [120/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:17.407 [121/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:17.407 [122/264] Linking static target lib/librte_timer.a 00:01:17.407 [123/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:17.407 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:17.407 [125/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:17.407 [126/264] Linking static target lib/librte_cmdline.a 00:01:17.407 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:17.407 [128/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:17.407 [129/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:17.407 [130/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:17.407 [131/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:17.407 [132/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:17.407 [133/264] Linking target lib/librte_log.so.24.0 00:01:17.407 [134/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:17.407 [135/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:17.407 [136/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:17.407 [137/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:17.407 [138/264] Linking static target lib/librte_rcu.a 00:01:17.407 [139/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:17.407 [140/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:17.407 [141/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:17.407 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:17.407 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:17.407 [144/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:17.407 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:17.407 [146/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:17.407 [147/264] Linking static target lib/librte_dmadev.a 00:01:17.407 [148/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:17.407 [149/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:17.407 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:17.407 [151/264] Linking static target lib/librte_power.a 00:01:17.407 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:17.407 [153/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:17.407 [154/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:17.407 [155/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 
00:01:17.407 [156/264] Linking static target lib/librte_hash.a 00:01:17.407 [157/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:17.407 [158/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:17.407 [159/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:17.407 [160/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:17.407 [161/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:17.407 [162/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:17.407 [163/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:17.407 [164/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:17.407 [165/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:17.407 [166/264] Linking static target lib/librte_security.a 00:01:17.407 [167/264] Linking static target lib/librte_mempool.a 00:01:17.407 [168/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:17.407 [169/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:17.407 [170/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:17.670 [171/264] Linking static target lib/librte_compressdev.a 00:01:17.670 [172/264] Linking static target lib/librte_net.a 00:01:17.670 [173/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:17.670 [174/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:17.670 [175/264] Linking static target lib/librte_eal.a 00:01:17.670 [176/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:17.670 [177/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:17.670 [178/264] Linking static target lib/librte_reorder.a 00:01:17.670 [179/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:17.670 [180/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:17.670 [181/264] Linking target lib/librte_kvargs.so.24.0 00:01:17.670 [182/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:17.670 [183/264] Linking static target lib/librte_mbuf.a 00:01:17.670 [184/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:17.670 [185/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:17.670 [186/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.670 [187/264] Linking static target drivers/librte_bus_vdev.a 00:01:17.670 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:17.670 [189/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:17.670 [190/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.670 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:17.670 [192/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.670 [193/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.670 [194/264] Linking static target drivers/librte_bus_pci.a 00:01:17.670 [195/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:17.670 [196/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:17.670 [197/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:17.670 [198/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:17.932 [199/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.932 [200/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.932 [201/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.932 [202/264] Linking static target drivers/librte_mempool_ring.a 00:01:17.932 [203/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.932 [204/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.932 [205/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:17.932 [206/264] Linking static target lib/librte_cryptodev.a 00:01:17.932 [207/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.932 [208/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.932 [209/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.932 [210/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.932 [211/264] Linking target lib/librte_telemetry.so.24.0 00:01:18.193 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.193 [213/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:18.193 [214/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:18.193 [215/264] Linking static target lib/librte_ethdev.a 00:01:18.193 [216/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:18.193 [217/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.454 [218/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.454 [219/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.454 [220/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.454 [221/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.454 [222/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.715 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:18.715 [224/264] Linking static target lib/librte_vhost.a 00:01:18.716 [225/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.103 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.045 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.736 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.120 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.120 [230/264] Linking target lib/librte_eal.so.24.0 00:01:29.120 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:29.120 [232/264] Linking target lib/librte_ring.so.24.0 
00:01:29.120 [233/264] Linking target lib/librte_meter.so.24.0 00:01:29.120 [234/264] Linking target lib/librte_pci.so.24.0 00:01:29.120 [235/264] Linking target lib/librte_timer.so.24.0 00:01:29.120 [236/264] Linking target lib/librte_dmadev.so.24.0 00:01:29.120 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:29.381 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:29.381 [239/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:29.381 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:29.381 [241/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:29.381 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:29.381 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:29.381 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:29.381 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:29.642 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:29.642 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:29.642 [248/264] Linking target lib/librte_mbuf.so.24.0 00:01:29.642 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:29.903 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:29.903 [251/264] Linking target lib/librte_net.so.24.0 00:01:29.903 [252/264] Linking target lib/librte_compressdev.so.24.0 00:01:29.903 [253/264] Linking target lib/librte_reorder.so.24.0 00:01:29.903 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:29.903 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:29.903 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:30.164 [257/264] Linking target lib/librte_security.so.24.0 00:01:30.164 [258/264] Linking target lib/librte_hash.so.24.0 00:01:30.164 [259/264] Linking target lib/librte_cmdline.so.24.0 00:01:30.164 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:30.164 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:30.164 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:30.164 [263/264] Linking target lib/librte_power.so.24.0 00:01:30.425 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:30.425 INFO: autodetecting backend as ninja 00:01:30.425 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:31.368 CC lib/log/log.o 00:01:31.368 CC lib/log/log_flags.o 00:01:31.368 CC lib/log/log_deprecated.o 00:01:31.368 CC lib/ut_mock/mock.o 00:01:31.368 CC lib/ut/ut.o 00:01:31.629 LIB libspdk_ut_mock.a 00:01:31.629 LIB libspdk_log.a 00:01:31.629 LIB libspdk_ut.a 00:01:31.629 SO libspdk_ut_mock.so.6.0 00:01:31.629 SO libspdk_ut.so.2.0 00:01:31.629 SO libspdk_log.so.7.0 00:01:31.629 SYMLINK libspdk_ut_mock.so 00:01:31.629 SYMLINK libspdk_ut.so 00:01:31.629 SYMLINK libspdk_log.so 00:01:32.203 CC lib/dma/dma.o 00:01:32.203 CXX lib/trace_parser/trace.o 00:01:32.203 CC lib/ioat/ioat.o 00:01:32.203 CC lib/util/base64.o 00:01:32.203 CC lib/util/bit_array.o 00:01:32.203 CC lib/util/cpuset.o 00:01:32.203 CC lib/util/crc16.o 00:01:32.203 CC lib/util/crc32.o 00:01:32.203 CC lib/util/crc32c.o 00:01:32.203 CC lib/util/crc32_ieee.o 
00:01:32.203 CC lib/util/crc64.o 00:01:32.203 CC lib/util/dif.o 00:01:32.203 CC lib/util/hexlify.o 00:01:32.203 CC lib/util/fd.o 00:01:32.203 CC lib/util/file.o 00:01:32.203 CC lib/util/pipe.o 00:01:32.203 CC lib/util/iov.o 00:01:32.203 CC lib/util/math.o 00:01:32.203 CC lib/util/strerror_tls.o 00:01:32.203 CC lib/util/string.o 00:01:32.203 CC lib/util/uuid.o 00:01:32.203 CC lib/util/fd_group.o 00:01:32.203 CC lib/util/xor.o 00:01:32.203 CC lib/util/zipf.o 00:01:32.203 CC lib/vfio_user/host/vfio_user_pci.o 00:01:32.203 CC lib/vfio_user/host/vfio_user.o 00:01:32.203 LIB libspdk_dma.a 00:01:32.203 SO libspdk_dma.so.4.0 00:01:32.203 LIB libspdk_ioat.a 00:01:32.464 SYMLINK libspdk_dma.so 00:01:32.464 SO libspdk_ioat.so.7.0 00:01:32.464 SYMLINK libspdk_ioat.so 00:01:32.464 LIB libspdk_vfio_user.a 00:01:32.464 SO libspdk_vfio_user.so.5.0 00:01:32.464 LIB libspdk_util.a 00:01:32.464 SYMLINK libspdk_vfio_user.so 00:01:32.726 SO libspdk_util.so.9.0 00:01:32.726 SYMLINK libspdk_util.so 00:01:32.987 LIB libspdk_trace_parser.a 00:01:32.987 SO libspdk_trace_parser.so.5.0 00:01:32.987 SYMLINK libspdk_trace_parser.so 00:01:32.987 CC lib/vmd/vmd.o 00:01:32.987 CC lib/vmd/led.o 00:01:32.987 CC lib/env_dpdk/env.o 00:01:32.987 CC lib/json/json_parse.o 00:01:32.987 CC lib/idxd/idxd.o 00:01:33.249 CC lib/env_dpdk/memory.o 00:01:33.249 CC lib/json/json_util.o 00:01:33.249 CC lib/idxd/idxd_user.o 00:01:33.249 CC lib/json/json_write.o 00:01:33.249 CC lib/env_dpdk/pci.o 00:01:33.249 CC lib/env_dpdk/init.o 00:01:33.249 CC lib/env_dpdk/threads.o 00:01:33.249 CC lib/env_dpdk/pci_ioat.o 00:01:33.249 CC lib/env_dpdk/pci_virtio.o 00:01:33.249 CC lib/env_dpdk/pci_vmd.o 00:01:33.249 CC lib/rdma/common.o 00:01:33.249 CC lib/conf/conf.o 00:01:33.249 CC lib/env_dpdk/sigbus_handler.o 00:01:33.249 CC lib/rdma/rdma_verbs.o 00:01:33.249 CC lib/env_dpdk/pci_idxd.o 00:01:33.249 CC lib/env_dpdk/pci_event.o 00:01:33.249 CC lib/env_dpdk/pci_dpdk.o 00:01:33.249 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:33.249 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:33.249 LIB libspdk_conf.a 00:01:33.510 SO libspdk_conf.so.6.0 00:01:33.510 LIB libspdk_json.a 00:01:33.510 LIB libspdk_rdma.a 00:01:33.510 SO libspdk_json.so.6.0 00:01:33.510 SO libspdk_rdma.so.6.0 00:01:33.510 SYMLINK libspdk_conf.so 00:01:33.510 SYMLINK libspdk_json.so 00:01:33.510 SYMLINK libspdk_rdma.so 00:01:33.510 LIB libspdk_idxd.a 00:01:33.771 SO libspdk_idxd.so.12.0 00:01:33.771 LIB libspdk_vmd.a 00:01:33.771 SYMLINK libspdk_idxd.so 00:01:33.771 SO libspdk_vmd.so.6.0 00:01:33.771 SYMLINK libspdk_vmd.so 00:01:33.771 CC lib/jsonrpc/jsonrpc_server.o 00:01:33.771 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:33.771 CC lib/jsonrpc/jsonrpc_client.o 00:01:33.771 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:34.032 LIB libspdk_jsonrpc.a 00:01:34.293 SO libspdk_jsonrpc.so.6.0 00:01:34.293 SYMLINK libspdk_jsonrpc.so 00:01:34.293 LIB libspdk_env_dpdk.a 00:01:34.293 SO libspdk_env_dpdk.so.14.0 00:01:34.554 SYMLINK libspdk_env_dpdk.so 00:01:34.554 CC lib/rpc/rpc.o 00:01:34.815 LIB libspdk_rpc.a 00:01:34.815 SO libspdk_rpc.so.6.0 00:01:34.815 SYMLINK libspdk_rpc.so 00:01:35.386 CC lib/trace/trace.o 00:01:35.386 CC lib/trace/trace_flags.o 00:01:35.386 CC lib/trace/trace_rpc.o 00:01:35.386 CC lib/notify/notify.o 00:01:35.386 CC lib/notify/notify_rpc.o 00:01:35.386 CC lib/keyring/keyring.o 00:01:35.386 CC lib/keyring/keyring_rpc.o 00:01:35.386 LIB libspdk_notify.a 00:01:35.386 LIB libspdk_trace.a 00:01:35.386 SO libspdk_notify.so.6.0 00:01:35.648 LIB libspdk_keyring.a 00:01:35.648 SO libspdk_trace.so.10.0 
00:01:35.648 SO libspdk_keyring.so.1.0 00:01:35.648 SYMLINK libspdk_notify.so 00:01:35.648 SYMLINK libspdk_keyring.so 00:01:35.648 SYMLINK libspdk_trace.so 00:01:35.909 CC lib/thread/thread.o 00:01:35.909 CC lib/thread/iobuf.o 00:01:35.909 CC lib/sock/sock.o 00:01:35.909 CC lib/sock/sock_rpc.o 00:01:36.480 LIB libspdk_sock.a 00:01:36.480 SO libspdk_sock.so.9.0 00:01:36.480 SYMLINK libspdk_sock.so 00:01:36.741 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:36.741 CC lib/nvme/nvme_ctrlr.o 00:01:36.741 CC lib/nvme/nvme_ns_cmd.o 00:01:36.741 CC lib/nvme/nvme_fabric.o 00:01:36.741 CC lib/nvme/nvme_ns.o 00:01:36.741 CC lib/nvme/nvme_pcie_common.o 00:01:36.741 CC lib/nvme/nvme_pcie.o 00:01:36.741 CC lib/nvme/nvme_qpair.o 00:01:36.741 CC lib/nvme/nvme.o 00:01:36.741 CC lib/nvme/nvme_quirks.o 00:01:36.741 CC lib/nvme/nvme_transport.o 00:01:36.741 CC lib/nvme/nvme_discovery.o 00:01:36.741 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:36.741 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:36.741 CC lib/nvme/nvme_tcp.o 00:01:36.741 CC lib/nvme/nvme_opal.o 00:01:36.741 CC lib/nvme/nvme_io_msg.o 00:01:36.741 CC lib/nvme/nvme_poll_group.o 00:01:36.741 CC lib/nvme/nvme_zns.o 00:01:36.741 CC lib/nvme/nvme_stubs.o 00:01:36.741 CC lib/nvme/nvme_auth.o 00:01:36.741 CC lib/nvme/nvme_cuse.o 00:01:36.741 CC lib/nvme/nvme_rdma.o 00:01:37.312 LIB libspdk_thread.a 00:01:37.312 SO libspdk_thread.so.10.0 00:01:37.312 SYMLINK libspdk_thread.so 00:01:37.573 CC lib/blob/blobstore.o 00:01:37.573 CC lib/blob/request.o 00:01:37.573 CC lib/blob/zeroes.o 00:01:37.573 CC lib/blob/blob_bs_dev.o 00:01:37.573 CC lib/accel/accel.o 00:01:37.573 CC lib/virtio/virtio.o 00:01:37.573 CC lib/virtio/virtio_vhost_user.o 00:01:37.573 CC lib/accel/accel_rpc.o 00:01:37.573 CC lib/virtio/virtio_vfio_user.o 00:01:37.573 CC lib/accel/accel_sw.o 00:01:37.573 CC lib/virtio/virtio_pci.o 00:01:37.835 CC lib/init/json_config.o 00:01:37.835 CC lib/init/subsystem.o 00:01:37.835 CC lib/init/subsystem_rpc.o 00:01:37.835 CC lib/init/rpc.o 00:01:37.835 LIB libspdk_init.a 00:01:38.096 SO libspdk_init.so.5.0 00:01:38.096 LIB libspdk_virtio.a 00:01:38.096 SYMLINK libspdk_init.so 00:01:38.096 SO libspdk_virtio.so.7.0 00:01:38.096 SYMLINK libspdk_virtio.so 00:01:38.358 CC lib/event/app.o 00:01:38.358 CC lib/event/reactor.o 00:01:38.358 CC lib/event/log_rpc.o 00:01:38.358 CC lib/event/app_rpc.o 00:01:38.358 CC lib/event/scheduler_static.o 00:01:38.618 LIB libspdk_accel.a 00:01:38.618 SO libspdk_accel.so.15.0 00:01:38.618 LIB libspdk_nvme.a 00:01:38.618 SYMLINK libspdk_accel.so 00:01:38.618 SO libspdk_nvme.so.13.0 00:01:38.878 LIB libspdk_event.a 00:01:38.878 SO libspdk_event.so.13.0 00:01:38.878 SYMLINK libspdk_event.so 00:01:39.139 SYMLINK libspdk_nvme.so 00:01:39.139 CC lib/bdev/bdev.o 00:01:39.139 CC lib/bdev/bdev_rpc.o 00:01:39.139 CC lib/bdev/bdev_zone.o 00:01:39.139 CC lib/bdev/part.o 00:01:39.139 CC lib/bdev/scsi_nvme.o 00:01:40.082 LIB libspdk_blob.a 00:01:40.082 SO libspdk_blob.so.11.0 00:01:40.343 SYMLINK libspdk_blob.so 00:01:40.605 CC lib/blobfs/blobfs.o 00:01:40.605 CC lib/blobfs/tree.o 00:01:40.605 CC lib/lvol/lvol.o 00:01:41.179 LIB libspdk_blobfs.a 00:01:41.179 LIB libspdk_bdev.a 00:01:41.441 SO libspdk_blobfs.so.10.0 00:01:41.441 LIB libspdk_lvol.a 00:01:41.441 SO libspdk_bdev.so.15.0 00:01:41.441 SO libspdk_lvol.so.10.0 00:01:41.441 SYMLINK libspdk_blobfs.so 00:01:41.442 SYMLINK libspdk_bdev.so 00:01:41.442 SYMLINK libspdk_lvol.so 00:01:41.704 CC lib/nvmf/ctrlr.o 00:01:41.704 CC lib/nvmf/ctrlr_discovery.o 00:01:41.704 CC lib/nvmf/ctrlr_bdev.o 00:01:41.704 CC 
lib/nvmf/subsystem.o 00:01:41.704 CC lib/nvmf/nvmf_rpc.o 00:01:41.704 CC lib/nvmf/nvmf.o 00:01:41.704 CC lib/nvmf/transport.o 00:01:41.704 CC lib/nvmf/tcp.o 00:01:41.704 CC lib/nvmf/rdma.o 00:01:41.704 CC lib/ftl/ftl_core.o 00:01:41.704 CC lib/ftl/ftl_init.o 00:01:41.704 CC lib/scsi/dev.o 00:01:41.704 CC lib/nbd/nbd.o 00:01:41.704 CC lib/ftl/ftl_layout.o 00:01:41.704 CC lib/ftl/ftl_debug.o 00:01:41.704 CC lib/scsi/lun.o 00:01:41.704 CC lib/nbd/nbd_rpc.o 00:01:41.704 CC lib/ftl/ftl_io.o 00:01:41.704 CC lib/scsi/port.o 00:01:41.704 CC lib/ftl/ftl_sb.o 00:01:41.704 CC lib/ftl/ftl_l2p.o 00:01:41.704 CC lib/scsi/scsi.o 00:01:41.704 CC lib/scsi/scsi_bdev.o 00:01:41.704 CC lib/ftl/ftl_l2p_flat.o 00:01:41.704 CC lib/scsi/scsi_pr.o 00:01:41.704 CC lib/ftl/ftl_nv_cache.o 00:01:41.704 CC lib/ftl/ftl_band.o 00:01:41.704 CC lib/scsi/scsi_rpc.o 00:01:41.704 CC lib/ublk/ublk.o 00:01:41.704 CC lib/ftl/ftl_band_ops.o 00:01:41.704 CC lib/ublk/ublk_rpc.o 00:01:41.704 CC lib/scsi/task.o 00:01:41.704 CC lib/ftl/ftl_writer.o 00:01:41.704 CC lib/ftl/ftl_rq.o 00:01:41.704 CC lib/ftl/ftl_reloc.o 00:01:41.704 CC lib/ftl/ftl_l2p_cache.o 00:01:41.704 CC lib/ftl/ftl_p2l.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:41.704 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:41.963 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:41.963 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:41.963 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:41.963 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:41.963 CC lib/ftl/utils/ftl_conf.o 00:01:41.963 CC lib/ftl/utils/ftl_md.o 00:01:41.963 CC lib/ftl/utils/ftl_mempool.o 00:01:41.963 CC lib/ftl/utils/ftl_bitmap.o 00:01:41.963 CC lib/ftl/utils/ftl_property.o 00:01:41.963 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:41.963 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:41.963 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:41.963 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:41.963 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:41.963 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:41.963 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:41.963 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:41.963 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:41.963 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:41.963 CC lib/ftl/base/ftl_base_bdev.o 00:01:41.963 CC lib/ftl/base/ftl_base_dev.o 00:01:41.963 CC lib/ftl/ftl_trace.o 00:01:42.224 LIB libspdk_nbd.a 00:01:42.224 SO libspdk_nbd.so.7.0 00:01:42.224 LIB libspdk_scsi.a 00:01:42.485 SYMLINK libspdk_nbd.so 00:01:42.485 SO libspdk_scsi.so.9.0 00:01:42.485 LIB libspdk_ublk.a 00:01:42.485 SO libspdk_ublk.so.3.0 00:01:42.485 SYMLINK libspdk_scsi.so 00:01:42.485 SYMLINK libspdk_ublk.so 00:01:42.746 LIB libspdk_ftl.a 00:01:42.746 CC lib/vhost/vhost_rpc.o 00:01:42.746 CC lib/vhost/vhost.o 00:01:42.746 CC lib/vhost/vhost_scsi.o 00:01:42.746 CC lib/vhost/vhost_blk.o 00:01:42.746 CC lib/vhost/rte_vhost_user.o 00:01:42.746 SO libspdk_ftl.so.9.0 00:01:42.747 CC lib/iscsi/conn.o 00:01:42.747 CC lib/iscsi/init_grp.o 00:01:42.747 CC lib/iscsi/iscsi.o 00:01:42.747 CC lib/iscsi/md5.o 00:01:42.747 CC lib/iscsi/param.o 00:01:42.747 CC lib/iscsi/portal_grp.o 00:01:42.747 CC lib/iscsi/tgt_node.o 00:01:43.008 CC lib/iscsi/iscsi_subsystem.o 00:01:43.008 CC lib/iscsi/iscsi_rpc.o 00:01:43.008 CC lib/iscsi/task.o 00:01:43.269 SYMLINK libspdk_ftl.so 
00:01:43.532 LIB libspdk_nvmf.a 00:01:43.532 SO libspdk_nvmf.so.18.0 00:01:43.792 LIB libspdk_vhost.a 00:01:43.792 SYMLINK libspdk_nvmf.so 00:01:43.792 SO libspdk_vhost.so.8.0 00:01:44.051 SYMLINK libspdk_vhost.so 00:01:44.051 LIB libspdk_iscsi.a 00:01:44.051 SO libspdk_iscsi.so.8.0 00:01:44.311 SYMLINK libspdk_iscsi.so 00:01:44.882 CC module/env_dpdk/env_dpdk_rpc.o 00:01:44.882 CC module/sock/posix/posix.o 00:01:44.882 LIB libspdk_env_dpdk_rpc.a 00:01:44.882 CC module/keyring/file/keyring.o 00:01:44.882 CC module/keyring/file/keyring_rpc.o 00:01:44.882 CC module/blob/bdev/blob_bdev.o 00:01:44.882 CC module/accel/ioat/accel_ioat.o 00:01:44.882 CC module/accel/error/accel_error.o 00:01:44.882 CC module/accel/error/accel_error_rpc.o 00:01:44.882 CC module/accel/ioat/accel_ioat_rpc.o 00:01:44.882 CC module/accel/iaa/accel_iaa.o 00:01:44.882 CC module/scheduler/gscheduler/gscheduler.o 00:01:44.882 CC module/accel/dsa/accel_dsa.o 00:01:44.882 CC module/accel/iaa/accel_iaa_rpc.o 00:01:45.143 CC module/accel/dsa/accel_dsa_rpc.o 00:01:45.143 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:45.143 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:45.143 SO libspdk_env_dpdk_rpc.so.6.0 00:01:45.143 SYMLINK libspdk_env_dpdk_rpc.so 00:01:45.143 LIB libspdk_keyring_file.a 00:01:45.143 LIB libspdk_accel_error.a 00:01:45.143 LIB libspdk_scheduler_gscheduler.a 00:01:45.143 LIB libspdk_scheduler_dpdk_governor.a 00:01:45.143 SO libspdk_keyring_file.so.1.0 00:01:45.143 SO libspdk_scheduler_gscheduler.so.4.0 00:01:45.143 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:45.143 LIB libspdk_accel_ioat.a 00:01:45.143 SO libspdk_accel_error.so.2.0 00:01:45.143 LIB libspdk_accel_iaa.a 00:01:45.143 LIB libspdk_scheduler_dynamic.a 00:01:45.143 LIB libspdk_blob_bdev.a 00:01:45.143 SO libspdk_accel_ioat.so.6.0 00:01:45.143 SYMLINK libspdk_keyring_file.so 00:01:45.143 LIB libspdk_accel_dsa.a 00:01:45.405 SO libspdk_accel_iaa.so.3.0 00:01:45.405 SO libspdk_scheduler_dynamic.so.4.0 00:01:45.405 SYMLINK libspdk_scheduler_gscheduler.so 00:01:45.405 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:45.405 SO libspdk_blob_bdev.so.11.0 00:01:45.405 SYMLINK libspdk_accel_error.so 00:01:45.405 SO libspdk_accel_dsa.so.5.0 00:01:45.405 SYMLINK libspdk_accel_ioat.so 00:01:45.405 SYMLINK libspdk_scheduler_dynamic.so 00:01:45.405 SYMLINK libspdk_accel_iaa.so 00:01:45.405 SYMLINK libspdk_blob_bdev.so 00:01:45.405 SYMLINK libspdk_accel_dsa.so 00:01:45.668 LIB libspdk_sock_posix.a 00:01:45.668 SO libspdk_sock_posix.so.6.0 00:01:45.668 SYMLINK libspdk_sock_posix.so 00:01:45.929 CC module/blobfs/bdev/blobfs_bdev.o 00:01:45.929 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:45.929 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:45.929 CC module/bdev/null/bdev_null.o 00:01:45.929 CC module/bdev/iscsi/bdev_iscsi.o 00:01:45.929 CC module/bdev/null/bdev_null_rpc.o 00:01:45.929 CC module/bdev/gpt/gpt.o 00:01:45.929 CC module/bdev/gpt/vbdev_gpt.o 00:01:45.929 CC module/bdev/split/vbdev_split.o 00:01:45.929 CC module/bdev/split/vbdev_split_rpc.o 00:01:45.929 CC module/bdev/lvol/vbdev_lvol.o 00:01:45.929 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:45.929 CC module/bdev/error/vbdev_error.o 00:01:45.929 CC module/bdev/delay/vbdev_delay.o 00:01:45.929 CC module/bdev/error/vbdev_error_rpc.o 00:01:45.929 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:45.929 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:45.929 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:45.929 CC module/bdev/passthru/vbdev_passthru.o 00:01:45.929 CC 
module/bdev/raid/bdev_raid.o 00:01:45.929 CC module/bdev/raid/bdev_raid_sb.o 00:01:45.929 CC module/bdev/malloc/bdev_malloc.o 00:01:45.929 CC module/bdev/raid/bdev_raid_rpc.o 00:01:45.929 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:45.929 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:45.929 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:45.929 CC module/bdev/raid/raid0.o 00:01:45.929 CC module/bdev/raid/raid1.o 00:01:45.929 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:45.929 CC module/bdev/aio/bdev_aio.o 00:01:45.929 CC module/bdev/nvme/bdev_nvme.o 00:01:45.929 CC module/bdev/aio/bdev_aio_rpc.o 00:01:45.929 CC module/bdev/raid/concat.o 00:01:45.929 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:45.929 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:45.929 CC module/bdev/nvme/nvme_rpc.o 00:01:45.929 CC module/bdev/nvme/bdev_mdns_client.o 00:01:45.929 CC module/bdev/nvme/vbdev_opal.o 00:01:45.929 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:45.929 CC module/bdev/ftl/bdev_ftl.o 00:01:45.929 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:45.929 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:46.188 LIB libspdk_blobfs_bdev.a 00:01:46.188 SO libspdk_blobfs_bdev.so.6.0 00:01:46.188 LIB libspdk_bdev_split.a 00:01:46.188 SO libspdk_bdev_split.so.6.0 00:01:46.188 LIB libspdk_bdev_null.a 00:01:46.188 LIB libspdk_bdev_gpt.a 00:01:46.188 LIB libspdk_bdev_error.a 00:01:46.188 SYMLINK libspdk_blobfs_bdev.so 00:01:46.188 SO libspdk_bdev_gpt.so.6.0 00:01:46.188 LIB libspdk_bdev_ftl.a 00:01:46.188 SO libspdk_bdev_null.so.6.0 00:01:46.188 SO libspdk_bdev_error.so.6.0 00:01:46.188 LIB libspdk_bdev_passthru.a 00:01:46.188 LIB libspdk_bdev_zone_block.a 00:01:46.188 SYMLINK libspdk_bdev_split.so 00:01:46.188 LIB libspdk_bdev_aio.a 00:01:46.188 LIB libspdk_bdev_iscsi.a 00:01:46.188 SO libspdk_bdev_ftl.so.6.0 00:01:46.188 SO libspdk_bdev_passthru.so.6.0 00:01:46.188 SYMLINK libspdk_bdev_gpt.so 00:01:46.188 LIB libspdk_bdev_malloc.a 00:01:46.188 LIB libspdk_bdev_delay.a 00:01:46.188 SYMLINK libspdk_bdev_null.so 00:01:46.188 SO libspdk_bdev_aio.so.6.0 00:01:46.188 SO libspdk_bdev_zone_block.so.6.0 00:01:46.188 SO libspdk_bdev_iscsi.so.6.0 00:01:46.188 SYMLINK libspdk_bdev_error.so 00:01:46.448 SO libspdk_bdev_malloc.so.6.0 00:01:46.448 SO libspdk_bdev_delay.so.6.0 00:01:46.448 SYMLINK libspdk_bdev_ftl.so 00:01:46.448 SYMLINK libspdk_bdev_passthru.so 00:01:46.448 SYMLINK libspdk_bdev_aio.so 00:01:46.448 SYMLINK libspdk_bdev_zone_block.so 00:01:46.448 LIB libspdk_bdev_lvol.a 00:01:46.448 SYMLINK libspdk_bdev_iscsi.so 00:01:46.448 SYMLINK libspdk_bdev_malloc.so 00:01:46.448 SYMLINK libspdk_bdev_delay.so 00:01:46.448 LIB libspdk_bdev_virtio.a 00:01:46.448 SO libspdk_bdev_lvol.so.6.0 00:01:46.448 SO libspdk_bdev_virtio.so.6.0 00:01:46.448 SYMLINK libspdk_bdev_lvol.so 00:01:46.448 SYMLINK libspdk_bdev_virtio.so 00:01:46.708 LIB libspdk_bdev_raid.a 00:01:46.708 SO libspdk_bdev_raid.so.6.0 00:01:46.967 SYMLINK libspdk_bdev_raid.so 00:01:47.911 LIB libspdk_bdev_nvme.a 00:01:47.911 SO libspdk_bdev_nvme.so.7.0 00:01:47.911 SYMLINK libspdk_bdev_nvme.so 00:01:48.853 CC module/event/subsystems/sock/sock.o 00:01:48.853 CC module/event/subsystems/iobuf/iobuf.o 00:01:48.853 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:48.853 CC module/event/subsystems/scheduler/scheduler.o 00:01:48.853 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:48.853 CC module/event/subsystems/vmd/vmd.o 00:01:48.853 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:48.853 CC module/event/subsystems/keyring/keyring.o 00:01:48.853 LIB libspdk_event_sock.a 
00:01:48.853 LIB libspdk_event_keyring.a 00:01:48.853 LIB libspdk_event_vhost_blk.a 00:01:48.853 LIB libspdk_event_scheduler.a 00:01:48.853 LIB libspdk_event_iobuf.a 00:01:48.853 LIB libspdk_event_vmd.a 00:01:48.853 SO libspdk_event_sock.so.5.0 00:01:48.853 SO libspdk_event_keyring.so.1.0 00:01:48.853 SO libspdk_event_vhost_blk.so.3.0 00:01:48.853 SO libspdk_event_scheduler.so.4.0 00:01:48.853 SO libspdk_event_iobuf.so.3.0 00:01:48.853 SO libspdk_event_vmd.so.6.0 00:01:48.853 SYMLINK libspdk_event_keyring.so 00:01:48.853 SYMLINK libspdk_event_sock.so 00:01:48.853 SYMLINK libspdk_event_vhost_blk.so 00:01:49.112 SYMLINK libspdk_event_scheduler.so 00:01:49.112 SYMLINK libspdk_event_vmd.so 00:01:49.112 SYMLINK libspdk_event_iobuf.so 00:01:49.372 CC module/event/subsystems/accel/accel.o 00:01:49.372 LIB libspdk_event_accel.a 00:01:49.636 SO libspdk_event_accel.so.6.0 00:01:49.636 SYMLINK libspdk_event_accel.so 00:01:49.898 CC module/event/subsystems/bdev/bdev.o 00:01:50.157 LIB libspdk_event_bdev.a 00:01:50.157 SO libspdk_event_bdev.so.6.0 00:01:50.157 SYMLINK libspdk_event_bdev.so 00:01:50.417 CC module/event/subsystems/scsi/scsi.o 00:01:50.417 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:50.417 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:50.417 CC module/event/subsystems/nbd/nbd.o 00:01:50.740 CC module/event/subsystems/ublk/ublk.o 00:01:50.740 LIB libspdk_event_scsi.a 00:01:50.740 LIB libspdk_event_nbd.a 00:01:50.740 LIB libspdk_event_ublk.a 00:01:50.740 SO libspdk_event_scsi.so.6.0 00:01:50.740 SO libspdk_event_nbd.so.6.0 00:01:50.740 SO libspdk_event_ublk.so.3.0 00:01:50.740 LIB libspdk_event_nvmf.a 00:01:50.740 SYMLINK libspdk_event_scsi.so 00:01:50.740 SYMLINK libspdk_event_nbd.so 00:01:50.740 SO libspdk_event_nvmf.so.6.0 00:01:50.740 SYMLINK libspdk_event_ublk.so 00:01:51.019 SYMLINK libspdk_event_nvmf.so 00:01:51.019 CC module/event/subsystems/iscsi/iscsi.o 00:01:51.019 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:51.280 LIB libspdk_event_vhost_scsi.a 00:01:51.280 LIB libspdk_event_iscsi.a 00:01:51.280 SO libspdk_event_vhost_scsi.so.3.0 00:01:51.280 SO libspdk_event_iscsi.so.6.0 00:01:51.541 SYMLINK libspdk_event_vhost_scsi.so 00:01:51.541 SYMLINK libspdk_event_iscsi.so 00:01:51.541 SO libspdk.so.6.0 00:01:51.541 SYMLINK libspdk.so 00:01:52.118 CC test/rpc_client/rpc_client_test.o 00:01:52.118 CC app/spdk_lspci/spdk_lspci.o 00:01:52.118 CXX app/trace/trace.o 00:01:52.118 CC app/spdk_top/spdk_top.o 00:01:52.118 TEST_HEADER include/spdk/accel_module.h 00:01:52.118 TEST_HEADER include/spdk/assert.h 00:01:52.118 TEST_HEADER include/spdk/accel.h 00:01:52.118 TEST_HEADER include/spdk/barrier.h 00:01:52.118 CC app/trace_record/trace_record.o 00:01:52.118 TEST_HEADER include/spdk/bdev.h 00:01:52.118 TEST_HEADER include/spdk/base64.h 00:01:52.118 TEST_HEADER include/spdk/bdev_zone.h 00:01:52.118 TEST_HEADER include/spdk/bit_array.h 00:01:52.118 TEST_HEADER include/spdk/bdev_module.h 00:01:52.118 CC app/spdk_nvme_discover/discovery_aer.o 00:01:52.118 TEST_HEADER include/spdk/bit_pool.h 00:01:52.118 CC app/spdk_nvme_identify/identify.o 00:01:52.118 TEST_HEADER include/spdk/blob_bdev.h 00:01:52.118 CC app/spdk_nvme_perf/perf.o 00:01:52.118 TEST_HEADER include/spdk/blobfs.h 00:01:52.118 TEST_HEADER include/spdk/blob.h 00:01:52.118 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:52.118 TEST_HEADER include/spdk/conf.h 00:01:52.118 TEST_HEADER include/spdk/config.h 00:01:52.118 TEST_HEADER include/spdk/cpuset.h 00:01:52.118 TEST_HEADER include/spdk/crc32.h 00:01:52.118 TEST_HEADER 
include/spdk/crc16.h 00:01:52.118 TEST_HEADER include/spdk/dif.h 00:01:52.118 TEST_HEADER include/spdk/crc64.h 00:01:52.118 TEST_HEADER include/spdk/dma.h 00:01:52.118 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:52.118 TEST_HEADER include/spdk/endian.h 00:01:52.118 TEST_HEADER include/spdk/env_dpdk.h 00:01:52.118 TEST_HEADER include/spdk/env.h 00:01:52.118 TEST_HEADER include/spdk/event.h 00:01:52.118 TEST_HEADER include/spdk/fd_group.h 00:01:52.118 CC app/iscsi_tgt/iscsi_tgt.o 00:01:52.118 TEST_HEADER include/spdk/file.h 00:01:52.118 TEST_HEADER include/spdk/ftl.h 00:01:52.118 TEST_HEADER include/spdk/fd.h 00:01:52.118 TEST_HEADER include/spdk/gpt_spec.h 00:01:52.118 TEST_HEADER include/spdk/hexlify.h 00:01:52.118 CC app/spdk_dd/spdk_dd.o 00:01:52.118 TEST_HEADER include/spdk/histogram_data.h 00:01:52.118 TEST_HEADER include/spdk/idxd.h 00:01:52.118 TEST_HEADER include/spdk/init.h 00:01:52.118 TEST_HEADER include/spdk/idxd_spec.h 00:01:52.118 CC app/nvmf_tgt/nvmf_main.o 00:01:52.118 TEST_HEADER include/spdk/ioat.h 00:01:52.118 TEST_HEADER include/spdk/ioat_spec.h 00:01:52.118 CC app/vhost/vhost.o 00:01:52.118 TEST_HEADER include/spdk/iscsi_spec.h 00:01:52.118 TEST_HEADER include/spdk/json.h 00:01:52.118 TEST_HEADER include/spdk/keyring.h 00:01:52.118 TEST_HEADER include/spdk/jsonrpc.h 00:01:52.118 TEST_HEADER include/spdk/keyring_module.h 00:01:52.118 TEST_HEADER include/spdk/likely.h 00:01:52.118 TEST_HEADER include/spdk/log.h 00:01:52.118 TEST_HEADER include/spdk/lvol.h 00:01:52.118 TEST_HEADER include/spdk/memory.h 00:01:52.118 TEST_HEADER include/spdk/mmio.h 00:01:52.118 TEST_HEADER include/spdk/notify.h 00:01:52.118 TEST_HEADER include/spdk/nbd.h 00:01:52.118 TEST_HEADER include/spdk/nvme.h 00:01:52.118 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:52.118 TEST_HEADER include/spdk/nvme_intel.h 00:01:52.118 TEST_HEADER include/spdk/nvme_spec.h 00:01:52.118 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:52.118 TEST_HEADER include/spdk/nvme_zns.h 00:01:52.118 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:52.118 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:52.118 TEST_HEADER include/spdk/nvmf.h 00:01:52.118 TEST_HEADER include/spdk/nvmf_spec.h 00:01:52.118 TEST_HEADER include/spdk/opal.h 00:01:52.118 TEST_HEADER include/spdk/nvmf_transport.h 00:01:52.118 TEST_HEADER include/spdk/opal_spec.h 00:01:52.118 CC app/spdk_tgt/spdk_tgt.o 00:01:52.118 TEST_HEADER include/spdk/pci_ids.h 00:01:52.118 TEST_HEADER include/spdk/pipe.h 00:01:52.118 TEST_HEADER include/spdk/queue.h 00:01:52.118 TEST_HEADER include/spdk/reduce.h 00:01:52.118 TEST_HEADER include/spdk/rpc.h 00:01:52.118 TEST_HEADER include/spdk/scheduler.h 00:01:52.118 TEST_HEADER include/spdk/scsi.h 00:01:52.118 TEST_HEADER include/spdk/sock.h 00:01:52.118 TEST_HEADER include/spdk/stdinc.h 00:01:52.118 TEST_HEADER include/spdk/scsi_spec.h 00:01:52.118 TEST_HEADER include/spdk/string.h 00:01:52.118 TEST_HEADER include/spdk/thread.h 00:01:52.118 TEST_HEADER include/spdk/trace.h 00:01:52.118 TEST_HEADER include/spdk/tree.h 00:01:52.118 TEST_HEADER include/spdk/ublk.h 00:01:52.118 TEST_HEADER include/spdk/trace_parser.h 00:01:52.118 TEST_HEADER include/spdk/util.h 00:01:52.118 TEST_HEADER include/spdk/uuid.h 00:01:52.118 TEST_HEADER include/spdk/version.h 00:01:52.118 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:52.118 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:52.118 TEST_HEADER include/spdk/vhost.h 00:01:52.118 TEST_HEADER include/spdk/vmd.h 00:01:52.118 TEST_HEADER include/spdk/zipf.h 00:01:52.118 TEST_HEADER include/spdk/xor.h 
00:01:52.118 CXX test/cpp_headers/accel.o 00:01:52.118 CXX test/cpp_headers/assert.o 00:01:52.118 CXX test/cpp_headers/accel_module.o 00:01:52.118 CXX test/cpp_headers/bdev.o 00:01:52.118 CXX test/cpp_headers/barrier.o 00:01:52.118 CXX test/cpp_headers/base64.o 00:01:52.118 CXX test/cpp_headers/bdev_module.o 00:01:52.118 CXX test/cpp_headers/bdev_zone.o 00:01:52.118 CXX test/cpp_headers/bit_array.o 00:01:52.118 CXX test/cpp_headers/bit_pool.o 00:01:52.118 CXX test/cpp_headers/blob_bdev.o 00:01:52.118 CXX test/cpp_headers/blobfs_bdev.o 00:01:52.118 CXX test/cpp_headers/conf.o 00:01:52.118 CXX test/cpp_headers/blob.o 00:01:52.118 CXX test/cpp_headers/blobfs.o 00:01:52.118 CXX test/cpp_headers/config.o 00:01:52.118 CXX test/cpp_headers/cpuset.o 00:01:52.118 CXX test/cpp_headers/crc16.o 00:01:52.118 CXX test/cpp_headers/dif.o 00:01:52.118 CXX test/cpp_headers/crc32.o 00:01:52.118 CXX test/cpp_headers/crc64.o 00:01:52.118 CXX test/cpp_headers/dma.o 00:01:52.118 CXX test/cpp_headers/env_dpdk.o 00:01:52.118 CXX test/cpp_headers/endian.o 00:01:52.118 CXX test/cpp_headers/env.o 00:01:52.118 CXX test/cpp_headers/event.o 00:01:52.118 CXX test/cpp_headers/fd_group.o 00:01:52.118 CXX test/cpp_headers/fd.o 00:01:52.118 CXX test/cpp_headers/file.o 00:01:52.118 CXX test/cpp_headers/ftl.o 00:01:52.118 CXX test/cpp_headers/histogram_data.o 00:01:52.118 CXX test/cpp_headers/hexlify.o 00:01:52.118 CXX test/cpp_headers/gpt_spec.o 00:01:52.118 CXX test/cpp_headers/idxd_spec.o 00:01:52.118 CXX test/cpp_headers/idxd.o 00:01:52.118 CXX test/cpp_headers/init.o 00:01:52.118 CXX test/cpp_headers/ioat.o 00:01:52.118 CXX test/cpp_headers/ioat_spec.o 00:01:52.118 CXX test/cpp_headers/iscsi_spec.o 00:01:52.118 CXX test/cpp_headers/json.o 00:01:52.118 CXX test/cpp_headers/jsonrpc.o 00:01:52.118 CXX test/cpp_headers/keyring.o 00:01:52.118 CXX test/cpp_headers/likely.o 00:01:52.118 CXX test/cpp_headers/keyring_module.o 00:01:52.118 CXX test/cpp_headers/log.o 00:01:52.118 CXX test/cpp_headers/lvol.o 00:01:52.118 CXX test/cpp_headers/memory.o 00:01:52.118 CXX test/cpp_headers/mmio.o 00:01:52.118 CXX test/cpp_headers/nbd.o 00:01:52.118 CXX test/cpp_headers/notify.o 00:01:52.118 CXX test/cpp_headers/nvme_intel.o 00:01:52.118 CXX test/cpp_headers/nvme.o 00:01:52.118 CXX test/cpp_headers/nvme_ocssd.o 00:01:52.118 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:52.118 CXX test/cpp_headers/nvme_spec.o 00:01:52.118 CXX test/cpp_headers/nvme_zns.o 00:01:52.118 CXX test/cpp_headers/nvmf_cmd.o 00:01:52.119 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:52.119 CXX test/cpp_headers/nvmf_spec.o 00:01:52.119 CXX test/cpp_headers/nvmf.o 00:01:52.119 CXX test/cpp_headers/nvmf_transport.o 00:01:52.388 CC test/event/event_perf/event_perf.o 00:01:52.388 CXX test/cpp_headers/opal.o 00:01:52.388 CXX test/cpp_headers/opal_spec.o 00:01:52.388 CXX test/cpp_headers/pci_ids.o 00:01:52.388 CXX test/cpp_headers/pipe.o 00:01:52.388 CXX test/cpp_headers/queue.o 00:01:52.388 CXX test/cpp_headers/reduce.o 00:01:52.389 CC test/event/reactor/reactor.o 00:01:52.389 CC test/app/jsoncat/jsoncat.o 00:01:52.389 CC examples/nvme/arbitration/arbitration.o 00:01:52.389 CC examples/nvme/hello_world/hello_world.o 00:01:52.389 CC test/nvme/sgl/sgl.o 00:01:52.389 CXX test/cpp_headers/rpc.o 00:01:52.389 CC examples/accel/perf/accel_perf.o 00:01:52.389 CC examples/nvme/hotplug/hotplug.o 00:01:52.389 CC test/nvme/overhead/overhead.o 00:01:52.389 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:52.389 CC test/env/memory/memory_ut.o 00:01:52.389 CC 
test/event/reactor_perf/reactor_perf.o 00:01:52.389 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:52.389 CC test/env/vtophys/vtophys.o 00:01:52.389 CC examples/idxd/perf/perf.o 00:01:52.389 CC test/nvme/aer/aer.o 00:01:52.389 CC test/app/histogram_perf/histogram_perf.o 00:01:52.389 CC test/nvme/simple_copy/simple_copy.o 00:01:52.389 CC test/nvme/connect_stress/connect_stress.o 00:01:52.389 CC test/nvme/startup/startup.o 00:01:52.389 CC test/app/stub/stub.o 00:01:52.389 CC test/nvme/reserve/reserve.o 00:01:52.389 CC test/nvme/err_injection/err_injection.o 00:01:52.389 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:52.389 CC app/fio/nvme/fio_plugin.o 00:01:52.389 CC test/nvme/fused_ordering/fused_ordering.o 00:01:52.389 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:52.389 CC test/event/app_repeat/app_repeat.o 00:01:52.389 CC examples/ioat/perf/perf.o 00:01:52.389 CC examples/nvme/abort/abort.o 00:01:52.389 CC test/nvme/cuse/cuse.o 00:01:52.389 CC test/thread/poller_perf/poller_perf.o 00:01:52.389 CC examples/nvme/reconnect/reconnect.o 00:01:52.389 CC examples/vmd/lsvmd/lsvmd.o 00:01:52.389 CC test/nvme/compliance/nvme_compliance.o 00:01:52.389 CC examples/ioat/verify/verify.o 00:01:52.389 CXX test/cpp_headers/scheduler.o 00:01:52.389 CC test/nvme/reset/reset.o 00:01:52.389 CC test/nvme/e2edp/nvme_dp.o 00:01:52.389 CC test/nvme/boot_partition/boot_partition.o 00:01:52.389 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:52.389 CC test/env/pci/pci_ut.o 00:01:52.389 CC examples/util/zipf/zipf.o 00:01:52.389 CC test/nvme/fdp/fdp.o 00:01:52.389 CC examples/thread/thread/thread_ex.o 00:01:52.389 CC test/accel/dif/dif.o 00:01:52.389 CC examples/vmd/led/led.o 00:01:52.389 CC examples/sock/hello_world/hello_sock.o 00:01:52.389 CC test/bdev/bdevio/bdevio.o 00:01:52.389 CC test/dma/test_dma/test_dma.o 00:01:52.389 CC test/event/scheduler/scheduler.o 00:01:52.389 CC test/app/bdev_svc/bdev_svc.o 00:01:52.389 CXX test/cpp_headers/scsi.o 00:01:52.389 CC examples/blob/cli/blobcli.o 00:01:52.389 CC examples/nvmf/nvmf/nvmf.o 00:01:52.389 CC examples/bdev/hello_world/hello_bdev.o 00:01:52.389 CC test/blobfs/mkfs/mkfs.o 00:01:52.389 CC app/fio/bdev/fio_plugin.o 00:01:52.389 CC examples/blob/hello_world/hello_blob.o 00:01:52.389 CC examples/bdev/bdevperf/bdevperf.o 00:01:52.389 LINK spdk_lspci 00:01:52.389 LINK rpc_client_test 00:01:52.659 LINK interrupt_tgt 00:01:52.659 LINK nvmf_tgt 00:01:52.659 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:52.659 LINK spdk_nvme_discover 00:01:52.659 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:52.659 CC test/lvol/esnap/esnap.o 00:01:52.659 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:52.659 CC test/env/mem_callbacks/mem_callbacks.o 00:01:52.923 LINK iscsi_tgt 00:01:52.923 LINK spdk_trace_record 00:01:52.923 LINK vhost 00:01:52.923 LINK spdk_tgt 00:01:52.923 LINK event_perf 00:01:52.923 LINK jsoncat 00:01:52.923 LINK reactor 00:01:52.923 LINK vtophys 00:01:52.923 LINK poller_perf 00:01:52.923 LINK reactor_perf 00:01:52.923 LINK lsvmd 00:01:52.923 LINK histogram_perf 00:01:52.923 LINK led 00:01:52.923 LINK app_repeat 00:01:52.923 LINK env_dpdk_post_init 00:01:52.923 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:52.923 CXX test/cpp_headers/scsi_spec.o 00:01:52.923 LINK pmr_persistence 00:01:52.923 LINK zipf 00:01:52.923 CXX test/cpp_headers/sock.o 00:01:52.923 LINK sgl 00:01:52.923 LINK err_injection 00:01:52.923 LINK connect_stress 00:01:52.923 CXX test/cpp_headers/stdinc.o 00:01:52.923 CXX test/cpp_headers/string.o 00:01:52.923 LINK hello_world 
00:01:52.923 LINK verify 00:01:52.923 CXX test/cpp_headers/thread.o 00:01:52.923 LINK overhead 00:01:52.923 LINK startup 00:01:52.923 LINK doorbell_aers 00:01:52.923 LINK fused_ordering 00:01:52.923 LINK boot_partition 00:01:52.923 LINK reserve 00:01:52.923 CXX test/cpp_headers/trace.o 00:01:52.923 LINK ioat_perf 00:01:52.923 CXX test/cpp_headers/trace_parser.o 00:01:52.923 CXX test/cpp_headers/tree.o 00:01:52.923 LINK cmb_copy 00:01:52.923 CXX test/cpp_headers/ublk.o 00:01:52.923 LINK bdev_svc 00:01:52.923 CXX test/cpp_headers/uuid.o 00:01:52.923 CXX test/cpp_headers/util.o 00:01:52.923 LINK mkfs 00:01:53.183 LINK stub 00:01:53.183 LINK spdk_dd 00:01:53.183 CXX test/cpp_headers/version.o 00:01:53.183 CXX test/cpp_headers/vfio_user_pci.o 00:01:53.183 CXX test/cpp_headers/vfio_user_spec.o 00:01:53.183 CXX test/cpp_headers/vhost.o 00:01:53.183 CXX test/cpp_headers/vmd.o 00:01:53.183 LINK simple_copy 00:01:53.183 CXX test/cpp_headers/xor.o 00:01:53.183 CXX test/cpp_headers/zipf.o 00:01:53.183 LINK thread 00:01:53.183 LINK hotplug 00:01:53.183 LINK hello_bdev 00:01:53.183 LINK arbitration 00:01:53.183 LINK aer 00:01:53.183 LINK scheduler 00:01:53.183 LINK reset 00:01:53.183 LINK nvme_dp 00:01:53.183 LINK hello_sock 00:01:53.183 LINK nvmf 00:01:53.183 LINK nvme_compliance 00:01:53.183 LINK fdp 00:01:53.183 LINK idxd_perf 00:01:53.183 LINK hello_blob 00:01:53.183 LINK reconnect 00:01:53.183 LINK abort 00:01:53.183 LINK dif 00:01:53.183 LINK test_dma 00:01:53.183 LINK pci_ut 00:01:53.183 LINK bdevio 00:01:53.183 LINK spdk_trace 00:01:53.183 LINK accel_perf 00:01:53.445 LINK nvme_manage 00:01:53.445 LINK spdk_nvme_perf 00:01:53.445 LINK nvme_fuzz 00:01:53.445 LINK spdk_bdev 00:01:53.445 LINK spdk_nvme 00:01:53.445 LINK blobcli 00:01:53.445 LINK vhost_fuzz 00:01:53.707 LINK spdk_top 00:01:53.707 LINK mem_callbacks 00:01:53.707 LINK spdk_nvme_identify 00:01:53.707 LINK bdevperf 00:01:53.707 LINK memory_ut 00:01:53.968 LINK cuse 00:01:54.542 LINK iscsi_fuzz 00:01:56.456 LINK esnap 00:01:56.717 00:01:56.717 real 0m49.182s 00:01:56.717 user 6m23.990s 00:01:56.717 sys 4m25.501s 00:01:56.717 12:45:01 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:56.717 12:45:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.717 ************************************ 00:01:56.717 END TEST make 00:01:56.717 ************************************ 00:01:56.717 12:45:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:56.717 12:45:01 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:56.717 12:45:01 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:56.717 12:45:01 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.717 12:45:01 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:56.717 12:45:01 -- pm/common@45 -- $ pid=3634015 00:01:56.717 12:45:01 -- pm/common@52 -- $ sudo kill -TERM 3634015 00:01:56.717 12:45:01 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.717 12:45:01 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:56.717 12:45:01 -- pm/common@45 -- $ pid=3634017 00:01:56.979 12:45:01 -- pm/common@52 -- $ sudo kill -TERM 3634017 00:01:56.979 12:45:01 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.979 12:45:01 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:56.979 12:45:01 -- pm/common@45 -- $ 
pid=3634018 00:01:56.979 12:45:01 -- pm/common@52 -- $ sudo kill -TERM 3634018 00:01:56.979 12:45:01 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.979 12:45:01 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:56.979 12:45:01 -- pm/common@45 -- $ pid=3634020 00:01:56.979 12:45:01 -- pm/common@52 -- $ sudo kill -TERM 3634020 00:01:56.979 12:45:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:56.979 12:45:02 -- nvmf/common.sh@7 -- # uname -s 00:01:56.979 12:45:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:56.979 12:45:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:56.979 12:45:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:56.979 12:45:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:56.979 12:45:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:56.979 12:45:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:56.979 12:45:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:56.979 12:45:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:56.979 12:45:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:56.979 12:45:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:56.979 12:45:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:01:56.979 12:45:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:01:56.979 12:45:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:56.979 12:45:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:56.979 12:45:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:56.979 12:45:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:56.979 12:45:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:56.979 12:45:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:56.979 12:45:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.979 12:45:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.979 12:45:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.979 12:45:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.979 12:45:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.979 12:45:02 -- paths/export.sh@5 -- # export PATH 00:01:56.979 12:45:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.979 12:45:02 -- nvmf/common.sh@47 -- # : 0 00:01:56.979 12:45:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:56.979 12:45:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:56.979 12:45:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:56.979 12:45:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:56.979 12:45:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:56.979 12:45:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:56.979 12:45:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:56.979 12:45:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:56.979 12:45:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:56.979 12:45:02 -- spdk/autotest.sh@32 -- # uname -s 00:01:57.241 12:45:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:57.241 12:45:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:57.241 12:45:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.241 12:45:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:57.241 12:45:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.241 12:45:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:57.241 12:45:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:57.241 12:45:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:57.241 12:45:02 -- spdk/autotest.sh@48 -- # udevadm_pid=3695791 00:01:57.241 12:45:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:57.241 12:45:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:57.241 12:45:02 -- pm/common@17 -- # local monitor 00:01:57.241 12:45:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.241 12:45:02 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3695792 00:01:57.241 12:45:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.241 12:45:02 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3695795 00:01:57.241 12:45:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.241 12:45:02 -- pm/common@21 -- # date +%s 00:01:57.241 12:45:02 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3695798 00:01:57.241 12:45:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.241 12:45:02 -- pm/common@21 -- # date +%s 00:01:57.241 12:45:02 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3695801 00:01:57.241 12:45:02 -- pm/common@26 -- # sleep 1 00:01:57.241 12:45:02 -- pm/common@21 -- # date +%s 00:01:57.241 12:45:02 -- pm/common@21 -- # date +%s 00:01:57.241 12:45:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714128302 00:01:57.241 12:45:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714128302 00:01:57.241 12:45:02 -- 
pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714128302 00:01:57.241 12:45:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714128302 00:01:57.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714128302_collect-vmstat.pm.log 00:01:57.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714128302_collect-bmc-pm.bmc.pm.log 00:01:57.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714128302_collect-cpu-load.pm.log 00:01:57.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714128302_collect-cpu-temp.pm.log 00:01:58.187 12:45:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:58.187 12:45:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:58.187 12:45:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:58.187 12:45:03 -- common/autotest_common.sh@10 -- # set +x 00:01:58.187 12:45:03 -- spdk/autotest.sh@59 -- # create_test_list 00:01:58.187 12:45:03 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:58.187 12:45:03 -- common/autotest_common.sh@10 -- # set +x 00:01:58.188 12:45:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:58.188 12:45:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.188 12:45:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.188 12:45:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:58.188 12:45:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.188 12:45:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:58.188 12:45:03 -- common/autotest_common.sh@1441 -- # uname 00:01:58.188 12:45:03 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:58.188 12:45:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:58.188 12:45:03 -- common/autotest_common.sh@1461 -- # uname 00:01:58.188 12:45:03 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:58.188 12:45:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:58.188 12:45:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:58.188 12:45:03 -- spdk/autotest.sh@72 -- # hash lcov 00:01:58.188 12:45:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:58.188 12:45:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:58.188 --rc lcov_branch_coverage=1 00:01:58.188 --rc lcov_function_coverage=1 00:01:58.188 --rc genhtml_branch_coverage=1 00:01:58.188 --rc genhtml_function_coverage=1 00:01:58.188 --rc genhtml_legend=1 00:01:58.188 --rc geninfo_all_blocks=1 00:01:58.188 ' 00:01:58.188 12:45:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:58.188 --rc lcov_branch_coverage=1 00:01:58.188 --rc lcov_function_coverage=1 00:01:58.188 --rc genhtml_branch_coverage=1 00:01:58.188 --rc genhtml_function_coverage=1 00:01:58.188 --rc genhtml_legend=1 00:01:58.188 --rc geninfo_all_blocks=1 00:01:58.188 ' 00:01:58.188 12:45:03 -- 
spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:58.188 --rc lcov_branch_coverage=1 00:01:58.188 --rc lcov_function_coverage=1 00:01:58.188 --rc genhtml_branch_coverage=1 00:01:58.188 --rc genhtml_function_coverage=1 00:01:58.188 --rc genhtml_legend=1 00:01:58.188 --rc geninfo_all_blocks=1 00:01:58.188 --no-external' 00:01:58.188 12:45:03 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:58.188 --rc lcov_branch_coverage=1 00:01:58.188 --rc lcov_function_coverage=1 00:01:58.188 --rc genhtml_branch_coverage=1 00:01:58.188 --rc genhtml_function_coverage=1 00:01:58.188 --rc genhtml_legend=1 00:01:58.188 --rc geninfo_all_blocks=1 00:01:58.188 --no-external' 00:01:58.188 12:45:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:58.188 lcov: LCOV version 1.14 00:01:58.188 12:45:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:06.442 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:06.442 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:06.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:06.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:06.443 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:06.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:06.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:09.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:09.749 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:19.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:19.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:19.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:19.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:19.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 
00:02:19.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:26.342 12:45:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:26.342 12:45:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:26.342 12:45:30 -- common/autotest_common.sh@10 -- # set +x 00:02:26.342 12:45:30 -- spdk/autotest.sh@91 -- # rm -f 00:02:26.342 12:45:30 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:28.892 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:28.892 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:29.154 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:29.154 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:29.415 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:29.415 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:29.415 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:29.415 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:29.677 12:45:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:29.677 12:45:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:29.677 12:45:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:29.677 12:45:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:29.677 12:45:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:29.677 12:45:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:29.677 12:45:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:29.677 12:45:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:29.677 12:45:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:29.677 12:45:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:29.677 12:45:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:29.677 12:45:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:29.677 12:45:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:29.677 12:45:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:29.677 12:45:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:29.677 No valid GPT data, bailing 00:02:29.677 12:45:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:29.677 12:45:34 -- scripts/common.sh@391 -- # pt= 00:02:29.677 12:45:34 -- scripts/common.sh@392 -- # return 1 00:02:29.677 12:45:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:29.677 1+0 records in 00:02:29.677 1+0 records out 00:02:29.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00195772 s, 536 MB/s 00:02:29.678 12:45:34 -- spdk/autotest.sh@118 -- # sync 00:02:29.678 
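A condensed restatement of the pre-cleanup wipe traced above, for readers skimming the log: zoned namespaces are skipped, each remaining whole NVMe namespace is probed for a partition table, and only namespaces with no recognizable GPT or partition table get their first MiB zeroed before a sync. A minimal sketch of that pattern, assuming root and bash with extglob; this is an illustration of the traced steps, not the exact autotest.sh code:

    shopt -s extglob                               # needed for the nvme*n!(*p*) glob below
    for nvme in /dev/nvme*n!(*p*); do              # whole namespaces only, no partitions
        dev=${nvme#/dev/}
        # zoned namespaces report something other than "none" here and are left untouched
        if [[ -e /sys/block/$dev/queue/zoned && $(cat /sys/block/$dev/queue/zoned) != none ]]; then
            continue
        fi
        # wipe only namespaces without a recognizable partition table (mirrors the
        # "No valid GPT data, bailing" / blkid PTTYPE check seen in the trace)
        if [[ -z $(blkid -s PTTYPE -o value "$nvme" 2>/dev/null) ]]; then
            dd if=/dev/zero of="$nvme" bs=1M count=1
        fi
    done
    sync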
12:45:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:29.678 12:45:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:29.678 12:45:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:37.825 12:45:42 -- spdk/autotest.sh@124 -- # uname -s 00:02:37.825 12:45:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:37.825 12:45:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:37.825 12:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.825 12:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.825 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:02:37.825 ************************************ 00:02:37.825 START TEST setup.sh 00:02:37.825 ************************************ 00:02:37.825 12:45:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:37.825 * Looking for test storage... 00:02:37.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.825 12:45:42 -- setup/test-setup.sh@10 -- # uname -s 00:02:37.825 12:45:42 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:37.825 12:45:42 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:37.825 12:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.825 12:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.825 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:02:37.825 ************************************ 00:02:37.825 START TEST acl 00:02:37.825 ************************************ 00:02:37.825 12:45:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:38.086 * Looking for test storage... 
00:02:38.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:38.086 12:45:42 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:38.086 12:45:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:38.086 12:45:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:38.086 12:45:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:38.086 12:45:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:38.086 12:45:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:38.086 12:45:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:38.086 12:45:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.086 12:45:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:38.086 12:45:42 -- setup/acl.sh@12 -- # devs=() 00:02:38.086 12:45:42 -- setup/acl.sh@12 -- # declare -a devs 00:02:38.086 12:45:42 -- setup/acl.sh@13 -- # drivers=() 00:02:38.086 12:45:42 -- setup/acl.sh@13 -- # declare -A drivers 00:02:38.086 12:45:42 -- setup/acl.sh@51 -- # setup reset 00:02:38.086 12:45:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.086 12:45:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.313 12:45:46 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:42.313 12:45:46 -- setup/acl.sh@16 -- # local dev driver 00:02:42.313 12:45:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.313 12:45:46 -- setup/acl.sh@15 -- # setup output status 00:02:42.313 12:45:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.313 12:45:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:45.613 Hugepages 00:02:45.613 node hugesize free / total 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 00:02:45.613 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
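The per-device loop that starts above and continues through the next several entries is acl.sh classifying hardware: it reads the "setup.sh status" table row by row, ignores the hugepage and header rows (their second field is not a PCI BDF), skips everything bound to ioatdma, and records any controller whose driver is nvme and which is not already listed in PCI_BLOCKED. A condensed sketch of that read loop, with illustrative variable names; assumed and simplified, not the literal acl.sh source:

    declare -a devs
    declare -A drivers
    while read -r _ bdf _ _ _ driver _; do
        [[ $bdf == *:*:*.* ]] || continue        # skip hugepage summary and header rows
        [[ $driver == nvme ]] || continue        # ioatdma channels are not ACL test targets
        devs+=("$bdf")
        drivers["$bdf"]=$driver
    done < <(./scripts/setup.sh status)
    printf 'found %d NVMe controller(s): %s\n' "${#devs[@]}" "${devs[*]}"

On this rig that leaves exactly one entry, 0000:65:00.0, which is why the later "(( 1 > 0 ))" check passes and the denied/allowed sub-tests below run.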
00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:45.613 12:45:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.613 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.613 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.613 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.614 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.614 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.614 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.614 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.614 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:45.614 12:45:50 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.614 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.614 12:45:50 -- setup/acl.sh@20 -- # continue 00:02:45.614 12:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.614 12:45:50 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:45.614 12:45:50 -- setup/acl.sh@54 -- # run_test denied denied 00:02:45.614 12:45:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:45.614 12:45:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:45.614 12:45:50 -- common/autotest_common.sh@10 -- # set +x 00:02:45.614 ************************************ 00:02:45.614 START TEST denied 00:02:45.614 ************************************ 00:02:45.614 12:45:50 -- common/autotest_common.sh@1111 -- # denied 00:02:45.614 12:45:50 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:45.614 12:45:50 -- setup/acl.sh@38 -- # setup output config 00:02:45.614 12:45:50 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:45.614 12:45:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.614 12:45:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.820 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:49.820 12:45:54 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:49.820 12:45:54 -- setup/acl.sh@28 -- # local dev driver 00:02:49.820 12:45:54 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:49.820 12:45:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:49.820 12:45:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:49.820 12:45:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:49.820 12:45:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:49.820 12:45:54 -- setup/acl.sh@41 -- # setup reset 00:02:49.820 12:45:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.820 12:45:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.105 00:02:55.105 real 0m8.722s 00:02:55.105 user 0m2.900s 00:02:55.105 sys 0m5.080s 00:02:55.105 12:45:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:55.105 12:45:59 -- common/autotest_common.sh@10 -- # set +x 00:02:55.105 ************************************ 00:02:55.105 END TEST denied 00:02:55.105 ************************************ 00:02:55.105 12:45:59 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:55.105 12:45:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:55.105 12:45:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:55.105 12:45:59 -- common/autotest_common.sh@10 -- # set +x 00:02:55.105 ************************************ 00:02:55.105 START TEST allowed 00:02:55.105 ************************************ 00:02:55.105 12:45:59 -- common/autotest_common.sh@1111 -- # allowed 00:02:55.105 12:45:59 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:02:55.105 12:45:59 -- setup/acl.sh@45 -- # setup output config 00:02:55.105 12:45:59 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:02:55.105 12:45:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.106 12:45:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
00:03:00.393 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:00.393 12:46:05 -- setup/acl.sh@47 -- # verify 00:03:00.393 12:46:05 -- setup/acl.sh@28 -- # local dev driver 00:03:00.393 12:46:05 -- setup/acl.sh@48 -- # setup reset 00:03:00.393 12:46:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.393 12:46:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.597 00:03:04.598 real 0m9.637s 00:03:04.598 user 0m2.906s 00:03:04.598 sys 0m5.025s 00:03:04.598 12:46:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:04.598 12:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:04.598 ************************************ 00:03:04.598 END TEST allowed 00:03:04.598 ************************************ 00:03:04.598 00:03:04.598 real 0m26.248s 00:03:04.598 user 0m8.670s 00:03:04.598 sys 0m15.245s 00:03:04.598 12:46:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:04.598 12:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:04.598 ************************************ 00:03:04.598 END TEST acl 00:03:04.598 ************************************ 00:03:04.598 12:46:09 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.598 12:46:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.598 12:46:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.598 12:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:04.598 ************************************ 00:03:04.598 START TEST hugepages 00:03:04.598 ************************************ 00:03:04.598 12:46:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.598 * Looking for test storage... 
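Taken together, the denied and allowed sub-tests above exercise setup.sh's block/allow lists: with PCI_BLOCKED=' 0000:65:00.0' the config pass prints "Skipping denied controller" and leaves the device on the kernel nvme driver, while PCI_ALLOWED=0000:65:00.0 limits rebinding to that one BDF and the "nvme -> vfio-pci" line shows it being handed to vfio-pci. The same environment variables work outside the test harness; a hedged usage sketch (the BDF below is this rig's controller, substitute your own):

    # keep 0000:65:00.0 on the kernel nvme driver; setup.sh skips it during config
    PCI_BLOCKED='0000:65:00.0' ./scripts/setup.sh config

    # the opposite: only 0000:65:00.0 is eligible for rebinding to vfio-pci/uio
    PCI_ALLOWED='0000:65:00.0' ./scripts/setup.sh config

    # return everything to the kernel drivers afterwards
    ./scripts/setup.sh reset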
00:03:04.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.598 12:46:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:04.598 12:46:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:04.598 12:46:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:04.598 12:46:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:04.598 12:46:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:04.598 12:46:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:04.598 12:46:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:04.598 12:46:09 -- setup/common.sh@18 -- # local node= 00:03:04.598 12:46:09 -- setup/common.sh@19 -- # local var val 00:03:04.598 12:46:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.598 12:46:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.598 12:46:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.598 12:46:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.598 12:46:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.598 12:46:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107415908 kB' 'MemAvailable: 110942308 kB' 'Buffers: 4124 kB' 'Cached: 10261876 kB' 'SwapCached: 0 kB' 'Active: 7354148 kB' 'Inactive: 3515708 kB' 'Active(anon): 6663752 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607320 kB' 'Mapped: 182984 kB' 'Shmem: 6059896 kB' 'KReclaimable: 289252 kB' 'Slab: 1049400 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 760148 kB' 'KernelStack: 26928 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 8032444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234652 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.598 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.598 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # continue 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.599 12:46:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.599 12:46:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.599 12:46:09 -- setup/common.sh@33 -- # echo 2048 00:03:04.599 12:46:09 -- setup/common.sh@33 -- # return 0 00:03:04.599 12:46:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:04.599 12:46:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:04.599 12:46:09 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:04.599 12:46:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:04.599 12:46:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:04.599 12:46:09 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:04.599 12:46:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:04.599 12:46:09 -- setup/hugepages.sh@207 -- # get_nodes 00:03:04.599 12:46:09 -- setup/hugepages.sh@27 -- # local node 00:03:04.599 12:46:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.599 12:46:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:04.599 12:46:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.599 12:46:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.599 12:46:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.599 12:46:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.599 12:46:09 -- setup/hugepages.sh@208 -- # clear_hp 00:03:04.599 12:46:09 -- setup/hugepages.sh@37 -- # local node hp 00:03:04.599 12:46:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.599 12:46:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.599 12:46:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.599 12:46:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.599 12:46:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.599 12:46:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.599 12:46:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.599 12:46:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.599 12:46:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.599 12:46:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.599 12:46:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.599 12:46:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.599 12:46:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:04.599 12:46:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.599 12:46:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.599 12:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:04.599 ************************************ 00:03:04.599 START TEST default_setup 00:03:04.599 ************************************ 00:03:04.599 12:46:09 -- common/autotest_common.sh@1111 -- # default_setup 00:03:04.599 12:46:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:04.599 12:46:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.599 12:46:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:04.599 12:46:09 -- setup/hugepages.sh@51 -- # shift 00:03:04.599 12:46:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:04.599 12:46:09 -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.599 12:46:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.599 12:46:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.599 12:46:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:04.599 12:46:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:04.599 12:46:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.599 12:46:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.599 12:46:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.599 12:46:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.599 12:46:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.599 12:46:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
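By this point in the trace, setup/common.sh's get_meminfo loop has walked /proc/meminfo (splitting each line with IFS=': ' and read -r var val _, and comparing every key against a backslash-escaped literal such as \H\u\g\e\p\a\g\e\s\i\z\e, which makes the [[ ... == ... ]] test a literal string match rather than a glob) and returned the 2048 kB default hugepage size. setup/hugepages.sh has then enumerated both NUMA nodes, cleared any pre-existing per-node reservations via clear_hp, and sized the default_setup test at 2097152 kB / 2048 kB = 1024 pages targeted at node 0. A condensed sketch of that preparation, writing to sysfs directly instead of going through the helper functions (run as root; sizes as reported in this run):

#!/usr/bin/env bash
# Sketch of the hugepage preparation traced above: detect the default hugepage
# size, zero every per-node reservation, then request 1024 pages on node 0.
set -e
size_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)   # 2048 in this run
nr_pages=$(( 2097152 / size_kb ))                                   # 2 GB test size -> 1024 pages
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"                                 # clear_hp equivalent
    done
done
echo "$nr_pages" > "/sys/devices/system/node/node0/hugepages/hugepages-${size_kb}kB/nr_hugepages"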
00:03:04.600 12:46:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.600 12:46:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:04.600 12:46:09 -- setup/hugepages.sh@73 -- # return 0 00:03:04.600 12:46:09 -- setup/hugepages.sh@137 -- # setup output 00:03:04.600 12:46:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.600 12:46:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.809 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:08.809 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:08.810 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:08.810 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:08.810 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:08.810 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:08.810 12:46:13 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:08.810 12:46:13 -- setup/hugepages.sh@89 -- # local node 00:03:08.810 12:46:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.810 12:46:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.810 12:46:13 -- setup/hugepages.sh@92 -- # local surp 00:03:08.810 12:46:13 -- setup/hugepages.sh@93 -- # local resv 00:03:08.810 12:46:13 -- setup/hugepages.sh@94 -- # local anon 00:03:08.810 12:46:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.810 12:46:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.810 12:46:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.810 12:46:13 -- setup/common.sh@18 -- # local node= 00:03:08.810 12:46:13 -- setup/common.sh@19 -- # local var val 00:03:08.810 12:46:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.810 12:46:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.810 12:46:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.810 12:46:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.810 12:46:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.810 12:46:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109605012 kB' 'MemAvailable: 113131412 kB' 'Buffers: 4124 kB' 'Cached: 10261992 kB' 'SwapCached: 0 kB' 'Active: 7370068 kB' 'Inactive: 3515708 kB' 'Active(anon): 6679672 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623076 kB' 'Mapped: 183896 kB' 'Shmem: 6060012 kB' 'KReclaimable: 289252 kB' 'Slab: 1047096 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757844 kB' 'KernelStack: 27312 
kB' 'PageTables: 9508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8050092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 
12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.810 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.810 12:46:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.810 12:46:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.811 12:46:13 -- setup/common.sh@33 -- # echo 0 00:03:08.811 12:46:13 -- setup/common.sh@33 -- # return 0 00:03:08.811 12:46:13 -- setup/hugepages.sh@97 -- # anon=0 00:03:08.811 12:46:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.811 12:46:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.811 12:46:13 -- setup/common.sh@18 -- # local node= 00:03:08.811 12:46:13 -- setup/common.sh@19 -- # local var val 00:03:08.811 12:46:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.811 12:46:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.811 12:46:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.811 12:46:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.811 12:46:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.811 12:46:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109604276 kB' 'MemAvailable: 113130676 kB' 'Buffers: 4124 kB' 'Cached: 10261992 kB' 'SwapCached: 0 kB' 'Active: 7369600 kB' 'Inactive: 3515708 kB' 'Active(anon): 6679204 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622616 kB' 'Mapped: 183352 kB' 'Shmem: 6060012 kB' 'KReclaimable: 289252 kB' 'Slab: 1047088 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757836 kB' 'KernelStack: 27200 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8047664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 
-- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.811 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.811 12:46:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 
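The scans running through this part of the trace are verify_nr_hugepages reusing get_meminfo for one counter at a time: AnonHugePages (0 kB in this run), then HugePages_Surp, HugePages_Rsvd and HugePages_Total. The helper caches the meminfo text into an array with mapfile, strips the "Node N" prefix when a per-node meminfo file is requested, and returns the value of the first matching key. A compact equivalent of one such lookup, assuming awk in place of the pure-bash field loop:

# Sketch: single-counter lookup in the spirit of get_meminfo <field> [node].
get_field() {
    local field=$1 node=${2:-}
    local src=/proc/meminfo
    [[ -n $node ]] && src=/sys/devices/system/node/node${node}/meminfo
    # Per-node meminfo lines carry a "Node <n>" prefix; drop it as the helper does.
    awk -v f="$field" '{ sub(/^Node [0-9]+ */, "") } $1 == f":" { print $2; exit }' "$src"
}
get_field HugePages_Surp      # prints 0 in this run
get_field HugePages_Total 0   # per-node count for node0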
00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.812 12:46:13 -- setup/common.sh@33 -- # echo 0 00:03:08.812 12:46:13 -- setup/common.sh@33 -- # return 0 00:03:08.812 12:46:13 -- setup/hugepages.sh@99 -- # surp=0 00:03:08.812 12:46:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.812 12:46:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.812 12:46:13 -- setup/common.sh@18 -- # local node= 00:03:08.812 12:46:13 -- setup/common.sh@19 -- # local var val 00:03:08.812 12:46:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.812 12:46:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.812 12:46:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.812 12:46:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.812 12:46:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.812 12:46:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109603664 kB' 'MemAvailable: 113130064 kB' 'Buffers: 4124 kB' 'Cached: 10262004 kB' 'SwapCached: 0 kB' 'Active: 7368752 kB' 'Inactive: 3515708 kB' 'Active(anon): 6678356 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621688 kB' 'Mapped: 183352 kB' 'Shmem: 6060024 kB' 'KReclaimable: 289252 kB' 'Slab: 1047208 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757956 kB' 'KernelStack: 26944 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8047676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.812 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.812 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 
12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # 
continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.813 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.813 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.813 12:46:13 -- setup/common.sh@33 -- # echo 0 00:03:08.813 12:46:13 -- setup/common.sh@33 -- # return 0 00:03:08.813 12:46:13 -- setup/hugepages.sh@100 -- # resv=0 00:03:08.813 12:46:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.813 nr_hugepages=1024 00:03:08.813 12:46:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.813 resv_hugepages=0 00:03:08.813 12:46:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.813 surplus_hugepages=0 00:03:08.813 12:46:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.814 anon_hugepages=0 00:03:08.814 12:46:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.814 12:46:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.814 12:46:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.814 12:46:13 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:08.814 12:46:13 -- setup/common.sh@18 -- # local node= 00:03:08.814 12:46:13 -- setup/common.sh@19 -- # local var val 00:03:08.814 12:46:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.814 12:46:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.814 12:46:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.814 12:46:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.814 12:46:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.814 12:46:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.814 12:46:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109603748 kB' 'MemAvailable: 113130148 kB' 'Buffers: 4124 kB' 'Cached: 10262020 kB' 'SwapCached: 0 kB' 'Active: 7369344 kB' 'Inactive: 3515708 kB' 'Active(anon): 6678948 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622268 kB' 'Mapped: 183352 kB' 'Shmem: 6060040 kB' 'KReclaimable: 289252 kB' 'Slab: 1047164 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757912 kB' 'KernelStack: 27152 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8046048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234700 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.814 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.814 12:46:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
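With anon, surplus and reserved all read back as 0, the checks that follow reduce to simple arithmetic: the 1024 pages requested for default_setup must account for the configured count plus any surplus and reserved pages, and the HugePages_Total counter being fetched next must report the same figure. A stand-alone restatement of that consistency check, assuming the counters are read straight from /proc/meminfo rather than through the helper:

# Sketch: the consistency check behind verify_nr_hugepages in this run.
expected=1024
read -r total surp rsvd < <(awk '
    $1 == "HugePages_Total:" { t = $2 }
    $1 == "HugePages_Surp:"  { s = $2 }
    $1 == "HugePages_Rsvd:"  { r = $2 }
    END { print t, s, r }' /proc/meminfo)
if (( expected == total + surp + rsvd )) && (( total == expected )); then
    echo "hugepages verified: ${total} allocated, 0 surplus, 0 reserved"
else
    echo "hugepage count mismatch: total=${total} surp=${surp} rsvd=${rsvd}" >&2
fi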
00:03:08.814 12:46:13 -- setup/common.sh@32 -- # continue
00:03:08.814 12:46:13 -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue   (repeated for the Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages and ShmemPmdMapped fields)
00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': '
00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _
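The trace above and below is setup/common.sh's get_meminfo walking a meminfo file one "Field: value" line at a time until it reaches the requested field; the scan resumes below with FileHugePages and stops when HugePages_Total matches, where it echoes 1024. A minimal re-creation of that lookup technique, written from the trace alone (the helper name get_meminfo_value and the exact handling of the per-node "Node <id> " prefix are assumptions, not the SPDK source):

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace; not the actual SPDK setup/common.sh code.
shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node id is given, prefer the per-node statistics (as the trace does for node 0).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node <id> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # kB for sized fields, a bare page count for HugePages_*
            return 0
        fi
    done
    return 1
}

get_meminfo_value HugePages_Total      # would print 1024 on this runner, per the trace
get_meminfo_value HugePages_Surp 0     # per-node lookup, the next thing the trace asks for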
00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.815 12:46:13 -- setup/common.sh@33 -- # echo 1024 00:03:08.815 12:46:13 -- setup/common.sh@33 -- # return 0 00:03:08.815 12:46:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.815 12:46:13 -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.815 12:46:13 -- setup/hugepages.sh@27 -- # local node 00:03:08.815 12:46:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.815 12:46:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.815 12:46:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.815 12:46:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.815 12:46:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.815 12:46:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.815 12:46:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.815 12:46:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.815 12:46:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.815 12:46:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.815 12:46:13 -- setup/common.sh@18 -- # local node=0 00:03:08.815 12:46:13 -- setup/common.sh@19 -- # local var val 00:03:08.815 12:46:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.815 12:46:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.815 12:46:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.815 12:46:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.815 12:46:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.815 12:46:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58996536 kB' 'MemUsed: 6662472 kB' 'SwapCached: 0 
kB' 'Active: 2392208 kB' 'Inactive: 107576 kB' 'Active(anon): 2082688 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 107576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2391632 kB' 'Mapped: 110264 kB' 'AnonPages: 111296 kB' 'Shmem: 1974536 kB' 'KernelStack: 12072 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158032 kB' 'Slab: 537988 kB' 'SReclaimable: 158032 kB' 'SUnreclaim: 379956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.815 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.815 12:46:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.816 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.816 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.816 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.816 12:46:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.816 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.816 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.816 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.816 
12:46:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.816 12:46:13 -- setup/common.sh@31-32 -- # continue / IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]   (repeated for node0's Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted and HugePages_Total fields)
00:03:08.816 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.816 12:46:13 -- setup/common.sh@32 -- # continue 00:03:08.816 12:46:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.816 12:46:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.816 12:46:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.816 12:46:13 -- setup/common.sh@33 -- # echo 0 00:03:08.816 12:46:13 -- setup/common.sh@33 -- # return 0 00:03:08.816 12:46:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.816 12:46:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.816 12:46:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.816 12:46:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.816 12:46:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.816 node0=1024 expecting 1024 00:03:08.816 12:46:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.816 00:03:08.816 real 0m4.101s 00:03:08.816 user 0m1.612s 00:03:08.816 sys 0m2.501s 00:03:08.816 12:46:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:08.816 12:46:13 -- common/autotest_common.sh@10 -- # set +x 00:03:08.816 ************************************ 00:03:08.816 END TEST default_setup 00:03:08.816 ************************************ 00:03:08.816 12:46:13 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:08.816 12:46:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.816 12:46:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.816 12:46:13 -- common/autotest_common.sh@10 -- # set +x 00:03:09.077 ************************************ 00:03:09.077 START TEST per_node_1G_alloc 00:03:09.077 ************************************ 00:03:09.077 12:46:13 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:09.077 12:46:13 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:09.077 12:46:13 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:09.077 12:46:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:09.077 12:46:13 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:09.077 12:46:13 -- setup/hugepages.sh@51 -- # shift 00:03:09.077 12:46:13 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:09.077 12:46:13 -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.077 12:46:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.077 12:46:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:09.077 12:46:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:09.077 12:46:13 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:09.077 12:46:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.077 12:46:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:09.077 12:46:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.077 12:46:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.077 12:46:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.077 12:46:13 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:09.077 12:46:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.077 12:46:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.077 12:46:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.077 12:46:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.077 12:46:13 -- setup/hugepages.sh@73 -- # return 0 00:03:09.077 12:46:13 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:09.077 
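At this point get_test_nr_hugepages has turned the requested 1048576 kB (1 GiB) per node into 512 default-sized 2048 kB pages for each of node 0 and node 1 (1048576 / 2048 = 512) and has set NRHUGE=512; the HUGENODE=0,1 assignment and the scripts/setup.sh run that actually programs the pages follow below. For reference, the generic kernel interface for a per-node request like this is the sysfs knob shown in this hand-rolled sketch, which is an assumption-level illustration and not necessarily the path scripts/setup.sh takes:

# Sketch: request 512 x 2 MiB hugepages on NUMA nodes 0 and 1 via the standard sysfs knobs.
NRHUGE=512
for node in 0 1; do
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$NRHUGE" | sudo tee "$sysfs" >/dev/null
    echo "node$node granted: $(cat "$sysfs")"
done

With 512 pages on each of the two nodes, /proc/meminfo reports HugePages_Total: 1024, which is the total that the verification below checks against.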
12:46:13 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:09.077 12:46:13 -- setup/hugepages.sh@146 -- # setup output 00:03:09.077 12:46:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.077 12:46:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.377 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:12.377 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:12.377 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:12.641 12:46:17 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:12.641 12:46:17 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:12.641 12:46:17 -- setup/hugepages.sh@89 -- # local node 00:03:12.641 12:46:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.641 12:46:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.641 12:46:17 -- setup/hugepages.sh@92 -- # local surp 00:03:12.641 12:46:17 -- setup/hugepages.sh@93 -- # local resv 00:03:12.641 12:46:17 -- setup/hugepages.sh@94 -- # local anon 00:03:12.641 12:46:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.641 12:46:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.641 12:46:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.641 12:46:17 -- setup/common.sh@18 -- # local node= 00:03:12.641 12:46:17 -- setup/common.sh@19 -- # local var val 00:03:12.641 12:46:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.641 12:46:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.641 12:46:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.641 12:46:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.641 12:46:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.641 12:46:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.641 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.641 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.641 12:46:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109601464 kB' 'MemAvailable: 113127864 kB' 'Buffers: 4124 kB' 'Cached: 10262136 kB' 'SwapCached: 0 kB' 'Active: 7368608 kB' 'Inactive: 3515708 kB' 'Active(anon): 6678212 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621264 kB' 'Mapped: 182248 
kB' 'Shmem: 6060156 kB' 'KReclaimable: 289252 kB' 'Slab: 1047404 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 758152 kB' 'KernelStack: 27072 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8038312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:12.641 12:46:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.641 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.642 12:46:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
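The scan in progress here (it continues below from VmallocTotal's "continue" entry) is verify_nr_hugepages looking up AnonHugePages; the same lookup is then repeated for HugePages_Surp and HugePages_Rsvd, and the earlier default_setup pass required HugePages_Total to equal nr_hugepages + surp + resv (1024 == 1024 + 0 + 0 in this run). A compact sketch of that accounting check, reusing the hypothetical get_meminfo_value helper sketched earlier (assumed names, not SPDK's verify_nr_hugepages itself):

# Sketch of the hugepage accounting check seen in the trace (hypothetical helper names).
nr_hugepages=1024                                   # 512 pages on each of 2 nodes
anon=$(get_meminfo_value AnonHugePages)             # 0 in this run; recorded by the script as well
surp=$(get_meminfo_value HugePages_Surp)            # 0
resv=$(get_meminfo_value HugePages_Rsvd)            # 0
total=$(get_meminfo_value HugePages_Total)          # 1024
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv anon=$anon"
else
    echo "unexpected hugepage accounting: total=$total expected=$((nr_hugepages + surp + resv))" >&2
fi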
00:03:12.642 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.642 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.643 12:46:17 -- setup/common.sh@33 -- # echo 0 00:03:12.643 12:46:17 -- setup/common.sh@33 -- # return 0 00:03:12.643 12:46:17 -- setup/hugepages.sh@97 -- # anon=0 00:03:12.643 12:46:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.643 12:46:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.643 12:46:17 -- setup/common.sh@18 -- # local node= 00:03:12.643 12:46:17 -- setup/common.sh@19 -- # local var val 00:03:12.643 12:46:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.643 12:46:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.643 12:46:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.643 12:46:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.643 12:46:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.643 12:46:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109601436 kB' 'MemAvailable: 113127836 kB' 'Buffers: 4124 kB' 'Cached: 10262140 kB' 'SwapCached: 0 kB' 'Active: 7368612 kB' 'Inactive: 3515708 kB' 'Active(anon): 6678216 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621332 kB' 'Mapped: 182196 kB' 'Shmem: 6060160 kB' 'KReclaimable: 289252 kB' 'Slab: 1047468 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 758216 kB' 'KernelStack: 27056 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8038572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 
12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.643 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.643 12:46:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.644 12:46:17 -- setup/common.sh@33 -- # echo 0 00:03:12.644 12:46:17 -- setup/common.sh@33 -- # return 0 00:03:12.644 12:46:17 -- setup/hugepages.sh@99 -- # surp=0 00:03:12.644 12:46:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.644 12:46:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.644 12:46:17 -- setup/common.sh@18 -- # local node= 00:03:12.644 12:46:17 -- setup/common.sh@19 -- # local var val 00:03:12.644 12:46:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.644 12:46:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.644 12:46:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.644 12:46:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.644 12:46:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.644 12:46:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109601464 kB' 'MemAvailable: 113127864 kB' 'Buffers: 4124 kB' 'Cached: 10262152 kB' 'SwapCached: 0 kB' 'Active: 7368336 kB' 'Inactive: 3515708 kB' 'Active(anon): 6677940 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621048 kB' 'Mapped: 182196 kB' 'Shmem: 6060172 kB' 'KReclaimable: 289252 kB' 'Slab: 1047468 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 758216 kB' 'KernelStack: 27008 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8038340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234668 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.644 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.644 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.645 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.645 12:46:17 -- setup/common.sh@32 -- # continue 
00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.646 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.646 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.646 12:46:17 -- setup/common.sh@33 -- # echo 0 00:03:12.646 12:46:17 -- setup/common.sh@33 -- # return 0 00:03:12.646 12:46:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:12.646 12:46:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.646 nr_hugepages=1024 00:03:12.646 12:46:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.646 resv_hugepages=0 00:03:12.646 12:46:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.646 surplus_hugepages=0 00:03:12.646 12:46:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.646 anon_hugepages=0 00:03:12.646 12:46:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.910 12:46:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
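[editor's note] The trace above is setup/common.sh's get_meminfo scanning every field of /proc/meminfo (or a per-node meminfo file) until it reaches the one requested; every other field falls through to "continue". A minimal standalone sketch of that parsing pattern follows, assuming the usual "Field: value kB" layout; the sed prefix strip stands in for the mapfile/extglob handling the real helper uses, so treat it as an illustration rather than the SPDK function itself.

# get_meminfo <field> [node]: print one field from /proc/meminfo or a per-node meminfo.
# Sketch only -- it mirrors the IFS=': ' read loop visible in the trace; the SPDK helper
# strips the "Node N " prefix with mapfile + extglob instead of sed.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix every line with "Node N"
    return 1
}

# Usage matching the checks in this trace:
#   surp=$(get_meminfo HugePages_Surp); resv=$(get_meminfo HugePages_Rsvd)
# after which the test asserts (( HugePages_Total == nr_hugepages + surp + resv )).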
00:03:12.910 12:46:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.910 12:46:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.910 12:46:17 -- setup/common.sh@18 -- # local node= 00:03:12.910 12:46:17 -- setup/common.sh@19 -- # local var val 00:03:12.910 12:46:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.910 12:46:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.910 12:46:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.910 12:46:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.910 12:46:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.910 12:46:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109600484 kB' 'MemAvailable: 113126884 kB' 'Buffers: 4124 kB' 'Cached: 10262164 kB' 'SwapCached: 0 kB' 'Active: 7368620 kB' 'Inactive: 3515708 kB' 'Active(anon): 6678224 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621292 kB' 'Mapped: 182196 kB' 'Shmem: 6060184 kB' 'KReclaimable: 289252 kB' 'Slab: 1047468 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 758216 kB' 'KernelStack: 27056 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8038356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234652 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.910 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.910 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 
-- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 
00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- 
setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.911 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.911 12:46:17 -- setup/common.sh@33 -- # echo 1024 00:03:12.911 12:46:17 -- setup/common.sh@33 -- # return 0 00:03:12.911 12:46:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.911 12:46:17 -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.911 12:46:17 -- setup/hugepages.sh@27 -- # local node 00:03:12.911 12:46:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.911 12:46:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.911 12:46:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.911 12:46:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.911 12:46:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.911 12:46:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.911 12:46:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.911 12:46:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.911 12:46:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.911 12:46:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.911 12:46:17 -- setup/common.sh@18 -- # local node=0 00:03:12.911 12:46:17 -- setup/common.sh@19 -- # local var val 00:03:12.911 12:46:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.911 12:46:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.911 12:46:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.911 12:46:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.911 12:46:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.911 12:46:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.911 12:46:17 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:12.912 12:46:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60050672 kB' 'MemUsed: 5608336 kB' 'SwapCached: 0 kB' 'Active: 2391592 kB' 'Inactive: 107576 kB' 'Active(anon): 2082072 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 107576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2391680 kB' 'Mapped: 109148 kB' 'AnonPages: 110632 kB' 'Shmem: 1974584 kB' 'KernelStack: 11896 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158032 kB' 'Slab: 538044 kB' 'SReclaimable: 158032 kB' 'SUnreclaim: 380012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # 
continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 
12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.912 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.912 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.912 12:46:17 -- setup/common.sh@33 -- # echo 0 00:03:12.912 12:46:17 -- setup/common.sh@33 -- # return 0 00:03:12.912 12:46:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.912 12:46:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.912 12:46:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.912 12:46:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:12.912 12:46:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.912 12:46:17 -- setup/common.sh@18 -- # local node=1 00:03:12.912 12:46:17 -- setup/common.sh@19 -- # local var val 00:03:12.912 12:46:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.912 12:46:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.912 12:46:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:12.912 12:46:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:12.912 12:46:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.912 12:46:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49550524 kB' 'MemUsed: 11129336 kB' 'SwapCached: 0 kB' 'Active: 4977016 kB' 'Inactive: 3408132 kB' 'Active(anon): 4596140 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408132 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7874624 kB' 'Mapped: 73048 kB' 'AnonPages: 510668 kB' 'Shmem: 4085616 kB' 'KernelStack: 15160 kB' 'PageTables: 5108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131220 kB' 'Slab: 509424 kB' 'SReclaimable: 131220 kB' 'SUnreclaim: 378204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 
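[editor's note] The same scan now runs against /sys/devices/system/node/node1/meminfo; once both nodes have been read, the test expects the 1024 pages to be split evenly, 512 per node, which is what the "node0=512 expecting 512" / "node1=512 expecting 512" lines below confirm. A small sketch of that per-node check, assuming the standard 2048kB hugepage sysfs layout; check_even_split is an illustrative name, not an SPDK function.

# Verify nr_hugepages is spread evenly across NUMA nodes (sketch, not hugepages.sh itself).
check_even_split() {
    local expected=$1 node count
    for node in /sys/devices/system/node/node[0-9]*; do
        count=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node##*/}=$count expecting $expected"
        (( count == expected )) || return 1
    done
}

# e.g. with 1024 pages on 2 nodes: check_even_split 512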
00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # continue 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.913 12:46:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.913 12:46:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.913 12:46:17 -- setup/common.sh@33 -- # echo 0 00:03:12.913 12:46:17 -- setup/common.sh@33 -- # return 0 00:03:12.913 12:46:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.914 12:46:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.914 12:46:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.914 12:46:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.914 12:46:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:12.914 node0=512 expecting 512 00:03:12.914 12:46:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.914 12:46:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.914 12:46:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.914 12:46:17 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:12.914 node1=512 expecting 512 00:03:12.914 12:46:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:12.914 00:03:12.914 real 0m3.859s 00:03:12.914 user 0m1.489s 00:03:12.914 sys 0m2.427s 00:03:12.914 12:46:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.914 12:46:17 -- common/autotest_common.sh@10 -- # set +x 00:03:12.914 ************************************ 00:03:12.914 END TEST per_node_1G_alloc 00:03:12.914 ************************************ 00:03:12.914 12:46:17 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:12.914 12:46:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.914 12:46:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.914 12:46:17 -- common/autotest_common.sh@10 -- # set +x 00:03:12.914 ************************************ 00:03:12.914 START TEST even_2G_alloc 00:03:12.914 ************************************ 00:03:12.914 12:46:17 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:12.914 12:46:17 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:12.914 12:46:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.914 12:46:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:12.914 12:46:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.914 12:46:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.914 12:46:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:12.914 12:46:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:12.914 12:46:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.914 12:46:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.914 12:46:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.914 12:46:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.914 12:46:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.914 12:46:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:12.914 12:46:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:12.914 12:46:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.914 12:46:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.914 12:46:17 -- setup/hugepages.sh@83 -- # : 512 00:03:12.914 12:46:17 -- setup/hugepages.sh@84 -- # : 1 00:03:12.914 12:46:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.914 12:46:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.914 12:46:17 -- setup/hugepages.sh@83 -- # : 0 00:03:12.914 12:46:17 -- setup/hugepages.sh@84 -- # : 0 00:03:12.914 12:46:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.175 12:46:17 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:13.175 12:46:17 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:13.175 12:46:17 -- setup/hugepages.sh@153 -- # setup output 00:03:13.175 12:46:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.175 12:46:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.478 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:16.478 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:16.478 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:16.478 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:16.478 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:16.478 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:16.478 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:16.479 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:16.479 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:16.479 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:16.747 12:46:21 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:16.747 12:46:21 -- setup/hugepages.sh@89 -- # local node 00:03:16.747 12:46:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.747 12:46:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.747 12:46:21 -- setup/hugepages.sh@92 -- # local surp 00:03:16.747 12:46:21 -- setup/hugepages.sh@93 -- # local resv 00:03:16.747 12:46:21 -- setup/hugepages.sh@94 -- # local anon 00:03:16.747 12:46:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.747 12:46:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.747 12:46:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.747 12:46:21 -- setup/common.sh@18 -- # local node= 00:03:16.747 12:46:21 -- setup/common.sh@19 -- # local var val 00:03:16.747 12:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.747 12:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.747 12:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.747 12:46:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.747 12:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.747 12:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109593856 kB' 'MemAvailable: 113120256 kB' 'Buffers: 4124 kB' 'Cached: 10262280 kB' 'SwapCached: 0 kB' 'Active: 7368248 kB' 'Inactive: 3515708 kB' 'Active(anon): 6677852 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620424 kB' 'Mapped: 182340 kB' 'Shmem: 6060300 kB' 'KReclaimable: 289252 kB' 'Slab: 1046896 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757644 kB' 'KernelStack: 26992 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8039060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ MemAvailable == 
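(For reference: the even_2G_alloc preamble above requests 2097152 kB of 2048 kB hugepages and, with HUGE_EVEN_ALLOC=yes, splits them evenly across the two NUMA nodes on this rig. A simplified sketch of that arithmetic with assumed variable names; the real computation in setup/hugepages.sh also handles user-supplied node lists:)

    size_kb=2097152                                 # requested hugepage memory, in kB
    hugepagesize_kb=2048                            # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024 pages total
    no_nodes=2                                      # NUMA nodes present on this machine
    per_node=$(( nr_hugepages / no_nodes ))         # 512 pages each on node0 and node1
    echo "node0=$per_node node1=$per_node"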
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.747 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.747 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 
12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.748 12:46:21 -- 
setup/common.sh@33 -- # echo 0 00:03:16.748 12:46:21 -- setup/common.sh@33 -- # return 0 00:03:16.748 12:46:21 -- setup/hugepages.sh@97 -- # anon=0 00:03:16.748 12:46:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.748 12:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.748 12:46:21 -- setup/common.sh@18 -- # local node= 00:03:16.748 12:46:21 -- setup/common.sh@19 -- # local var val 00:03:16.748 12:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.748 12:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.748 12:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.748 12:46:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.748 12:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.748 12:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109593576 kB' 'MemAvailable: 113119976 kB' 'Buffers: 4124 kB' 'Cached: 10262284 kB' 'SwapCached: 0 kB' 'Active: 7367876 kB' 'Inactive: 3515708 kB' 'Active(anon): 6677480 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620548 kB' 'Mapped: 182260 kB' 'Shmem: 6060304 kB' 'KReclaimable: 289252 kB' 'Slab: 1046880 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757628 kB' 'KernelStack: 26976 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8039072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234588 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 
12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 
12:46:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': 
' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.748 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.748 12:46:21 -- setup/common.sh@33 -- # echo 0 00:03:16.748 12:46:21 -- setup/common.sh@33 -- # return 0 00:03:16.748 12:46:21 -- setup/hugepages.sh@99 -- # surp=0 00:03:16.748 12:46:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.748 12:46:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.748 12:46:21 -- setup/common.sh@18 -- # local node= 00:03:16.748 12:46:21 -- setup/common.sh@19 -- # local var val 00:03:16.748 12:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.748 12:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.748 12:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.748 12:46:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.748 12:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.748 12:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.748 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.748 12:46:21 -- 
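(For reference: the long runs of '[[ <field> == \H\u\g\e... ]]' followed by 'continue' above are bash xtrace output from setup/common.sh walking every line of the meminfo file until it reaches the requested key; the backslashes are simply how xtrace prints a quoted, therefore literal, right-hand side of ==. A condensed sketch of that lookup, with assumed function and variable names rather than the verbatim helper:)

    shopt -s extglob    # needed for the "Node N " prefix strip below

    get_meminfo() {
        # condensed sketch of the lookup driving the xtrace output above
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }       # per-node files prefix each key with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then     # quoted RHS, hence the escaped pattern in xtrace
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # example: get_meminfo HugePages_Surp 0   ->  0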
setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109593888 kB' 'MemAvailable: 113120288 kB' 'Buffers: 4124 kB' 'Cached: 10262296 kB' 'SwapCached: 0 kB' 'Active: 7367880 kB' 'Inactive: 3515708 kB' 'Active(anon): 6677484 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620552 kB' 'Mapped: 182260 kB' 'Shmem: 6060316 kB' 'KReclaimable: 289252 kB' 'Slab: 1046880 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757628 kB' 'KernelStack: 26976 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8039084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234604 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 
00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- 
setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.749 12:46:21 -- setup/common.sh@33 -- # echo 0 00:03:16.749 12:46:21 -- setup/common.sh@33 -- # return 0 00:03:16.749 12:46:21 -- setup/hugepages.sh@100 -- # resv=0 00:03:16.749 12:46:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.749 nr_hugepages=1024 00:03:16.749 12:46:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.749 resv_hugepages=0 00:03:16.749 12:46:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.749 surplus_hugepages=0 00:03:16.749 12:46:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.749 anon_hugepages=0 00:03:16.749 12:46:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.749 12:46:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.749 12:46:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.749 12:46:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.749 12:46:21 -- setup/common.sh@18 -- # local node= 00:03:16.749 12:46:21 -- setup/common.sh@19 -- # local var val 00:03:16.749 12:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.749 12:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.749 12:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.749 12:46:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.749 12:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.749 12:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109593888 kB' 'MemAvailable: 113120288 kB' 'Buffers: 4124 kB' 'Cached: 10262312 kB' 'SwapCached: 0 kB' 'Active: 7367960 kB' 'Inactive: 3515708 kB' 'Active(anon): 6677564 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620552 kB' 'Mapped: 182260 kB' 'Shmem: 6060332 kB' 'KReclaimable: 289252 kB' 'Slab: 1046880 
kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757628 kB' 'KernelStack: 26976 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8039100 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.749 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.749 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 
12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 
00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.750 12:46:21 -- setup/common.sh@33 -- # echo 1024 00:03:16.750 12:46:21 -- setup/common.sh@33 -- # return 0 00:03:16.750 12:46:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.750 12:46:21 -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.750 12:46:21 -- setup/hugepages.sh@27 -- # local node 00:03:16.750 12:46:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.750 12:46:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.750 12:46:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.750 12:46:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.750 12:46:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.750 12:46:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.750 12:46:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.750 12:46:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.750 12:46:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.750 12:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.750 12:46:21 -- setup/common.sh@18 -- # local node=0 00:03:16.750 12:46:21 -- setup/common.sh@19 -- # local var val 00:03:16.750 12:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.750 12:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.750 12:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.750 12:46:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.750 12:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.750 12:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60038196 kB' 'MemUsed: 5620812 kB' 'SwapCached: 0 kB' 'Active: 2393364 kB' 'Inactive: 107576 kB' 'Active(anon): 2083844 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 107576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2391724 kB' 'Mapped: 109184 kB' 'AnonPages: 112408 kB' 'Shmem: 1974628 kB' 'KernelStack: 11944 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158032 kB' 'Slab: 537772 kB' 'SReclaimable: 158032 kB' 'SUnreclaim: 379740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 
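The trace above is setup/common.sh's get_meminfo at work: it reads a meminfo snapshot, splits each line on ': ', skips every key that is not the requested one (HugePages_Total a moment ago, HugePages_Surp for node 0 now), and echoes the value of the matching key. With a node argument it switches from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo and strips the leading "Node N " prefix first. A minimal stand-alone sketch of that pattern (the function name and error handling are illustrative, not the SPDK helper itself):

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern shown in the trace above.
# Illustrative only: not the setup/common.sh implementation.
get_meminfo_sketch() {
    local key=$1 node=${2:-}          # key e.g. HugePages_Surp; node is optional
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs and carry a "Node N " prefix on every line.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node "$node" }              # drop the "Node N " prefix if present
        IFS=': ' read -r var val _ <<<"$line"   # split "Key:   value kB"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example: surplus hugepages reported for NUMA node 0 (echoes 0 in the run above)
# get_meminfo_sketch HugePages_Surp 0
```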
00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.750 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.750 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@33 -- # echo 0 00:03:16.751 12:46:21 -- setup/common.sh@33 -- # return 0 00:03:16.751 12:46:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.751 12:46:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.751 12:46:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.751 12:46:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:16.751 12:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.751 12:46:21 -- setup/common.sh@18 -- # local node=1 00:03:16.751 12:46:21 -- setup/common.sh@19 -- # local var val 00:03:16.751 12:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.751 12:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.751 12:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.751 12:46:21 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.751 12:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.751 12:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49555692 kB' 'MemUsed: 11124168 kB' 'SwapCached: 0 kB' 'Active: 4974888 kB' 'Inactive: 3408132 kB' 'Active(anon): 4594012 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408132 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7874728 kB' 'Mapped: 73136 kB' 'AnonPages: 508400 kB' 'Shmem: 4085720 kB' 'KernelStack: 15048 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131220 kB' 'Slab: 509108 kB' 'SReclaimable: 131220 kB' 'SUnreclaim: 377888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- 
setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # continue 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.751 12:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.751 12:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.751 12:46:21 -- setup/common.sh@33 -- # echo 0 00:03:16.751 12:46:21 -- setup/common.sh@33 -- # return 0 00:03:16.751 12:46:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.751 12:46:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.751 12:46:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.751 12:46:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.751 12:46:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:16.751 node0=512 expecting 512 00:03:16.751 12:46:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.751 12:46:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.751 12:46:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.751 12:46:21 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:16.751 node1=512 expecting 512 00:03:16.751 12:46:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:16.751 00:03:16.751 real 0m3.802s 00:03:16.751 user 0m1.524s 00:03:16.751 sys 0m2.279s 00:03:16.751 12:46:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:16.751 12:46:21 -- common/autotest_common.sh@10 -- # set +x 00:03:16.751 ************************************ 00:03:16.751 END TEST even_2G_alloc 00:03:16.751 ************************************ 00:03:17.120 12:46:21 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:17.120 12:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.120 12:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.120 12:46:21 -- common/autotest_common.sh@10 -- # set +x 00:03:17.120 ************************************ 00:03:17.120 START TEST odd_alloc 00:03:17.120 ************************************ 00:03:17.120 12:46:21 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:17.120 12:46:21 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:17.120 12:46:21 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:17.120 12:46:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.120 12:46:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.120 12:46:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:17.120 12:46:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:17.120 12:46:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.120 12:46:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.120 12:46:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:17.120 12:46:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.120 12:46:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.120 12:46:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.120 12:46:21 
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.120 12:46:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:17.120 12:46:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.120 12:46:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:17.120 12:46:21 -- setup/hugepages.sh@83 -- # : 513 00:03:17.120 12:46:21 -- setup/hugepages.sh@84 -- # : 1 00:03:17.120 12:46:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.120 12:46:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:17.120 12:46:21 -- setup/hugepages.sh@83 -- # : 0 00:03:17.120 12:46:21 -- setup/hugepages.sh@84 -- # : 0 00:03:17.120 12:46:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.120 12:46:21 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:17.120 12:46:21 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:17.120 12:46:21 -- setup/hugepages.sh@160 -- # setup output 00:03:17.120 12:46:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.120 12:46:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.453 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:20.453 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:20.453 12:46:25 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:20.453 12:46:25 -- setup/hugepages.sh@89 -- # local node 00:03:20.453 12:46:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.453 12:46:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.453 12:46:25 -- setup/hugepages.sh@92 -- # local surp 00:03:20.453 12:46:25 -- setup/hugepages.sh@93 -- # local resv 00:03:20.453 12:46:25 -- setup/hugepages.sh@94 -- # local anon 00:03:20.453 12:46:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.453 12:46:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.453 12:46:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.453 12:46:25 -- setup/common.sh@18 -- # local node= 00:03:20.453 12:46:25 -- setup/common.sh@19 -- # local var val 00:03:20.453 12:46:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.453 12:46:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.453 12:46:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.453 12:46:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.453 12:46:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.453 
12:46:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109597036 kB' 'MemAvailable: 113123436 kB' 'Buffers: 4124 kB' 'Cached: 10262424 kB' 'SwapCached: 0 kB' 'Active: 7372536 kB' 'Inactive: 3515708 kB' 'Active(anon): 6682140 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624852 kB' 'Mapped: 182916 kB' 'Shmem: 6060444 kB' 'KReclaimable: 289252 kB' 'Slab: 1046824 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757572 kB' 'KernelStack: 26976 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8046932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234876 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.453 12:46:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.453 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.453 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 
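Earlier in this same verify pass the trace tested the transparent-hugepage policy string ("always [madvise] never") before looking up AnonHugePages, which the snapshot above reports as 0 kB. A hedged sketch of that guard, assuming the policy string is the contents of /sys/kernel/mm/transparent_hugepage/enabled (the file whose format matches the string in the log):

```bash
#!/usr/bin/env bash
# Sketch of the anon-hugepage guard suggested by the trace.
# Assumption: the "always [madvise] never" string checked in the log is the
# contents of /sys/kernel/mm/transparent_hugepage/enabled; the SPDK script
# may obtain it differently.
thp=/sys/kernel/mm/transparent_hugepage/enabled
anon_kb=0
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    # THP is not disabled, so anonymous huge pages may exist; record them the
    # way the trace does, by reading AnonHugePages (in kB) from /proc/meminfo.
    anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "AnonHugePages: ${anon_kb} kB"
```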
00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.454 12:46:25 -- setup/common.sh@33 -- # echo 0 00:03:20.454 12:46:25 -- setup/common.sh@33 -- # return 0 00:03:20.454 12:46:25 -- setup/hugepages.sh@97 -- # anon=0 00:03:20.454 12:46:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.454 12:46:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.454 12:46:25 -- setup/common.sh@18 -- # local node= 00:03:20.454 12:46:25 -- setup/common.sh@19 -- # local var val 00:03:20.454 12:46:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.454 12:46:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.454 12:46:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.454 12:46:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.454 12:46:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.454 12:46:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109591820 kB' 'MemAvailable: 113118220 kB' 'Buffers: 4124 kB' 'Cached: 10262424 kB' 'SwapCached: 0 kB' 'Active: 7374944 kB' 'Inactive: 3515708 kB' 'Active(anon): 6684548 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627384 kB' 'Mapped: 182868 kB' 'Shmem: 6060444 kB' 'KReclaimable: 289252 kB' 'Slab: 1046824 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757572 kB' 'KernelStack: 27184 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8048820 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
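The odd_alloc setup a little earlier asked for 1025 hugepages on two NUMA nodes and recorded a 513/512 split before running setup.sh with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes. A short sketch of that distribution, assuming plain floor division with the leftover page handed to the lowest-numbered node (consistent with the 513 and 512 values in the trace, though setup/hugepages.sh may walk the nodes in another order):

```bash
#!/usr/bin/env bash
# Sketch: spread an odd hugepage count across NUMA nodes, reproducing the
# 513/512 split the trace records for nr_hugepages=1025 on 2 nodes.
# Assumption: floor division with the remainder going to the first nodes.
nr_hugepages=1025
no_nodes=2
declare -a nodes_test

per_node=$((nr_hugepages / no_nodes))    # 512
remainder=$((nr_hugepages % no_nodes))   # 1

for ((node = 0; node < no_nodes; node++)); do
    extra=$((node < remainder ? 1 : 0))  # the first 'remainder' nodes take one extra page
    nodes_test[node]=$((per_node + extra))
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]}"   # node0=513, node1=512
done
```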
00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.454 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.454 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 
12:46:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.455 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.455 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.455 12:46:25 -- setup/common.sh@33 -- # echo 0 00:03:20.455 12:46:25 -- setup/common.sh@33 -- # return 0 00:03:20.455 12:46:25 -- setup/hugepages.sh@99 -- # surp=0 00:03:20.455 12:46:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.455 12:46:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.455 12:46:25 -- setup/common.sh@18 -- # local node= 00:03:20.455 12:46:25 -- setup/common.sh@19 -- # local var val 00:03:20.455 12:46:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.455 12:46:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.455 12:46:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.456 12:46:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.456 12:46:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.456 12:46:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109593516 kB' 'MemAvailable: 113119916 kB' 'Buffers: 4124 kB' 'Cached: 10262436 kB' 'SwapCached: 0 kB' 'Active: 7369696 kB' 'Inactive: 3515708 kB' 'Active(anon): 6679300 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622096 kB' 'Mapped: 182364 kB' 'Shmem: 6060456 kB' 'KReclaimable: 289252 kB' 'Slab: 1046824 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757572 kB' 'KernelStack: 27200 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8042960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234844 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:20.456 12:46:25 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- 
setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.456 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.456 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 
12:46:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.457 12:46:25 -- setup/common.sh@33 -- # echo 0 00:03:20.457 
12:46:25 -- setup/common.sh@33 -- # return 0 00:03:20.457 12:46:25 -- setup/hugepages.sh@100 -- # resv=0 00:03:20.457 12:46:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:20.457 nr_hugepages=1025 00:03:20.457 12:46:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.457 resv_hugepages=0 00:03:20.457 12:46:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.457 surplus_hugepages=0 00:03:20.457 12:46:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.457 anon_hugepages=0 00:03:20.457 12:46:25 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:20.457 12:46:25 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:20.457 12:46:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.457 12:46:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.457 12:46:25 -- setup/common.sh@18 -- # local node= 00:03:20.457 12:46:25 -- setup/common.sh@19 -- # local var val 00:03:20.457 12:46:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.457 12:46:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.457 12:46:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.457 12:46:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.457 12:46:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.457 12:46:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109597496 kB' 'MemAvailable: 113123896 kB' 'Buffers: 4124 kB' 'Cached: 10262436 kB' 'SwapCached: 0 kB' 'Active: 7369452 kB' 'Inactive: 3515708 kB' 'Active(anon): 6679056 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621840 kB' 'Mapped: 182304 kB' 'Shmem: 6060456 kB' 'KReclaimable: 289252 kB' 'Slab: 1046832 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757580 kB' 'KernelStack: 27200 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8042976 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234908 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
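Each long run of '[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' entries above is setup/common.sh's get_meminfo scanning a cached copy of /proc/meminfo field by field until it reaches the requested key (HugePages_Surp, then HugePages_Rsvd, and next HugePages_Total). A minimal sketch of that lookup pattern, with a hypothetical helper name and without the script's caching (not the literal setup/common.sh source):

    get_meminfo_value() {                      # simplified stand-in for get_meminfo
        local key=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do   # same IFS/read split the trace shows
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$file"
        echo 0                                 # key not present -> report 0
    }
    resv=$(get_meminfo_value HugePages_Rsvd)   # 0 in this run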
00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.457 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.457 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 
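The counters echoed a few entries back (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0) feed the consistency check at hugepages.sh@107; for this run the arithmetic it asserts is simply:

    nr_hugepages=1025 surp=0 resv=0   # values reported above
    (( 1025 == nr_hugepages + surp + resv )) \
        && echo "global HugePages_Total accounts for requested + surplus + reserved pages"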
00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 
12:46:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.458 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.458 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.458 12:46:25 -- setup/common.sh@33 -- # echo 1025 00:03:20.458 12:46:25 -- setup/common.sh@33 -- # return 0 00:03:20.458 12:46:25 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:20.458 12:46:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.458 12:46:25 -- setup/hugepages.sh@27 -- # local node 00:03:20.458 12:46:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.459 12:46:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.459 12:46:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.459 12:46:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:20.459 12:46:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.459 12:46:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.459 12:46:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.459 12:46:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.459 12:46:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.459 12:46:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.459 12:46:25 
-- setup/common.sh@18 -- # local node=0 00:03:20.459 12:46:25 -- setup/common.sh@19 -- # local var val 00:03:20.459 12:46:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.459 12:46:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.459 12:46:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.459 12:46:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.459 12:46:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.459 12:46:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60036636 kB' 'MemUsed: 5622372 kB' 'SwapCached: 0 kB' 'Active: 2395476 kB' 'Inactive: 107576 kB' 'Active(anon): 2085956 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 107576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2391792 kB' 'Mapped: 109212 kB' 'AnonPages: 114436 kB' 'Shmem: 1974696 kB' 'KernelStack: 11928 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158032 kB' 'Slab: 537980 kB' 'SReclaimable: 158032 kB' 'SUnreclaim: 379948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.459 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.459 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 
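Unlike the earlier system-wide passes, this get_meminfo call was given node=0, so setup/common.sh@23-24 switched mem_f from /proc/meminfo to the per-NUMA-node counters, and @29 strips the "Node N " prefix those sysfs files carry. A sketch of that source selection (simplified, outside the script's mapfile caching):

    node=0
    mem_f=/proc/meminfo                                     # default: system-wide counters
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo    # per-node counters instead
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                        # drop the "Node N " prefix, as at common.sh@29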
00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.722 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.722 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.722 12:46:25 -- setup/common.sh@33 -- # echo 0 00:03:20.722 12:46:25 -- setup/common.sh@33 -- # return 0 00:03:20.722 12:46:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.722 12:46:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.722 12:46:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.722 12:46:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.722 12:46:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.722 12:46:25 -- setup/common.sh@18 -- # local node=1 00:03:20.722 12:46:25 -- setup/common.sh@19 -- # local var val 00:03:20.722 12:46:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.722 12:46:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.723 12:46:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.723 12:46:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.723 12:46:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.723 12:46:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49560156 kB' 'MemUsed: 11119704 kB' 'SwapCached: 0 kB' 'Active: 4974564 kB' 'Inactive: 3408132 kB' 'Active(anon): 4593688 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408132 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7874768 kB' 'Mapped: 73092 kB' 'AnonPages: 507992 kB' 'Shmem: 4085760 kB' 'KernelStack: 15272 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131220 kB' 'Slab: 508852 kB' 'SReclaimable: 131220 kB' 'SUnreclaim: 377632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 
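The per-node readouts above already show where the odd page went: node0 reports 'HugePages_Total: 512' and node1 'HugePages_Total: 513', which together account for the 1025 pages seen globally:

    (( 512 + 513 == 1025 )) && echo "per-node hugepage totals sum to the global HugePages_Total"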
00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- 
setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.723 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.723 12:46:25 -- setup/common.sh@32 -- # continue 00:03:20.724 12:46:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.724 12:46:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.724 12:46:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.724 12:46:25 -- setup/common.sh@33 -- # echo 0 00:03:20.724 12:46:25 -- setup/common.sh@33 -- # return 0 00:03:20.724 12:46:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.724 12:46:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.724 12:46:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.724 12:46:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:20.724 node0=512 expecting 513 00:03:20.724 12:46:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.724 12:46:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
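For readers following the trace: the long runs of "[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" above are setup/common.sh's get_meminfo helper walking every line of /proc/meminfo until it reaches the requested field. A minimal sketch of that lookup pattern (illustrative only, not the verbatim SPDK source; the function name and structure here are assumptions):

    # Read "Key: value" pairs and print the value of the requested key.
    get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookups read the NUMA node's own meminfo file instead.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the per-field checks seen in the trace
        echo "$val"
        return 0
      done < "$mem_f"
      echo 0
    }
    # Example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this machine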
00:03:20.724 12:46:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.724 12:46:25 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:20.724 node1=513 expecting 512 00:03:20.724 12:46:25 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:20.724 00:03:20.724 real 0m3.584s 00:03:20.724 user 0m1.330s 00:03:20.724 sys 0m2.229s 00:03:20.724 12:46:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.724 12:46:25 -- common/autotest_common.sh@10 -- # set +x 00:03:20.724 ************************************ 00:03:20.724 END TEST odd_alloc 00:03:20.724 ************************************ 00:03:20.724 12:46:25 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:20.724 12:46:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.724 12:46:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.724 12:46:25 -- common/autotest_common.sh@10 -- # set +x 00:03:20.724 ************************************ 00:03:20.724 START TEST custom_alloc 00:03:20.724 ************************************ 00:03:20.724 12:46:25 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:20.724 12:46:25 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:20.724 12:46:25 -- setup/hugepages.sh@169 -- # local node 00:03:20.724 12:46:25 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:20.724 12:46:25 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:20.724 12:46:25 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:20.724 12:46:25 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:20.724 12:46:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:20.724 12:46:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:20.724 12:46:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.724 12:46:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.724 12:46:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.724 12:46:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:20.724 12:46:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.724 12:46:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.724 12:46:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.724 12:46:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:20.724 12:46:25 -- setup/hugepages.sh@83 -- # : 256 00:03:20.724 12:46:25 -- setup/hugepages.sh@84 -- # : 1 00:03:20.724 12:46:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:20.724 12:46:25 -- setup/hugepages.sh@83 -- # : 0 00:03:20.724 12:46:25 -- setup/hugepages.sh@84 -- # : 0 00:03:20.724 12:46:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:20.724 12:46:25 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:20.724 12:46:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.724 12:46:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.724 12:46:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.724 12:46:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.724 12:46:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.724 12:46:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.724 12:46:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.724 12:46:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.724 12:46:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.724 12:46:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.724 12:46:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:20.724 12:46:25 -- setup/hugepages.sh@78 -- # return 0 00:03:20.724 12:46:25 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:20.724 12:46:25 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:20.724 12:46:25 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:20.724 12:46:25 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:20.724 12:46:25 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:20.724 12:46:25 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:20.724 12:46:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.724 12:46:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.724 12:46:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.724 12:46:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.724 12:46:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.724 12:46:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.724 12:46:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:20.724 12:46:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.724 12:46:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:20.724 12:46:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.724 12:46:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:20.724 12:46:25 -- setup/hugepages.sh@78 -- # return 0 00:03:20.724 12:46:25 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:20.724 12:46:25 -- setup/hugepages.sh@187 -- # setup output 00:03:20.724 12:46:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.724 12:46:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.022 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:24.022 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.022 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.605 12:46:29 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:24.605 12:46:29 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:24.605 12:46:29 -- setup/hugepages.sh@89 -- # local node 00:03:24.605 12:46:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.605 12:46:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.605 12:46:29 -- setup/hugepages.sh@92 -- # local surp 00:03:24.605 12:46:29 -- setup/hugepages.sh@93 -- # local resv 00:03:24.605 12:46:29 -- setup/hugepages.sh@94 -- # local anon 00:03:24.605 12:46:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.605 12:46:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.605 12:46:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.605 12:46:29 -- setup/common.sh@18 -- # local node= 00:03:24.605 12:46:29 -- setup/common.sh@19 -- # local var val 00:03:24.605 12:46:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.605 12:46:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.605 12:46:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.605 12:46:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.606 12:46:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.606 12:46:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108557664 kB' 'MemAvailable: 112084064 kB' 'Buffers: 4124 kB' 'Cached: 10262568 kB' 'SwapCached: 0 kB' 'Active: 7372816 kB' 'Inactive: 3515708 kB' 'Active(anon): 6682420 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624956 kB' 'Mapped: 182432 kB' 'Shmem: 6060588 kB' 'KReclaimable: 289252 kB' 'Slab: 1046448 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757196 kB' 'KernelStack: 27088 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8063564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 
-- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- 
# [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 
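Stepping back to the custom_alloc preparation traced just before the PCI listing: two size requests (1048576 kB and 2097152 kB) were converted into per-NUMA-node hugepage counts, nodes_hp[0]=512 and nodes_hp[1]=1024, and joined into the HUGENODE string for a total of 1536 pages. A condensed sketch of that arithmetic, assuming the 2048 kB hugepage size reported in the meminfo dumps (variable handling simplified from setup/hugepages.sh):

    default_hugepages=2048                            # kB, Hugepagesize from /proc/meminfo
    declare -a nodes_hp HUGENODE
    nodes_hp[0]=$(( 1048576 / default_hugepages ))    # 512 pages requested on node 0
    nodes_hp[1]=$(( 2097152 / default_hugepages ))    # 1024 pages requested on node 1
    nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( nr_hugepages += nodes_hp[node] ))
    done
    ( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )         # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$nr_hugepages"                 # 1536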
00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.606 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 
-- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.607 12:46:29 -- setup/common.sh@33 -- # echo 0 00:03:24.607 12:46:29 -- setup/common.sh@33 -- # return 0 00:03:24.607 12:46:29 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.607 12:46:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.607 12:46:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.607 12:46:29 -- setup/common.sh@18 -- # local node= 00:03:24.607 12:46:29 -- setup/common.sh@19 -- # local var val 00:03:24.607 12:46:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.607 12:46:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.607 12:46:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.607 12:46:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.607 12:46:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.607 12:46:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108559304 kB' 'MemAvailable: 112085704 kB' 'Buffers: 4124 kB' 'Cached: 10262568 kB' 'SwapCached: 0 kB' 'Active: 7371740 kB' 'Inactive: 3515708 kB' 'Active(anon): 6681344 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623856 kB' 'Mapped: 182432 kB' 'Shmem: 6060588 kB' 'KReclaimable: 289252 kB' 'Slab: 1046448 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757196 kB' 'KernelStack: 26960 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8043372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 
-- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 
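A note on the backslash-riddled comparisons throughout this trace (for example \H\u\g\e\P\a\g\e\s\_\S\u\r\p): this is not corruption. When the right-hand side of == inside [[ ]] is a quoted variable, bash's xtrace prints it with every character escaped to show that it is matched literally rather than as a glob. A short reproduction (illustrative, outside the test suite):

    set -x
    get=HugePages_Surp var=MemTotal
    [[ $var == "$get" ]]     # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]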
00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.608 12:46:29 -- setup/common.sh@33 -- # echo 0 00:03:24.608 12:46:29 -- setup/common.sh@33 -- # return 0 00:03:24.608 12:46:29 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.608 12:46:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.608 12:46:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.608 12:46:29 -- setup/common.sh@18 -- # local node= 00:03:24.608 12:46:29 -- setup/common.sh@19 -- # local var val 00:03:24.608 12:46:29 -- setup/common.sh@20 
-- # local mem_f mem 00:03:24.608 12:46:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.608 12:46:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.608 12:46:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.608 12:46:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.608 12:46:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.608 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.608 12:46:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108559596 kB' 'MemAvailable: 112085996 kB' 'Buffers: 4124 kB' 'Cached: 10262568 kB' 'SwapCached: 0 kB' 'Active: 7370324 kB' 'Inactive: 3515708 kB' 'Active(anon): 6679928 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622920 kB' 'Mapped: 182348 kB' 'Shmem: 6060588 kB' 'KReclaimable: 289252 kB' 'Slab: 1046432 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757180 kB' 'KernelStack: 27024 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8043516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.608 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 
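What follows is verify_nr_hugepages collecting the remaining counters (HugePages_Rsvd here, then HugePages_Total) before the results echoed further down as nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. Roughly, the accounting it asserts is that the requested pool matches what the kernel reports once surplus and reserved pages are taken into account; a self-contained sketch (helper name and exact check are assumptions, not the SPDK source):

    requested=1536                                    # nodes_hp[0] + nodes_hp[1] from the setup above
    meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    anon=$(meminfo_val AnonHugePages)                 # transparent hugepages in use (0 in this run)
    surp=$(meminfo_val HugePages_Surp)                # surplus pages beyond the configured pool (0)
    resv=$(meminfo_val HugePages_Rsvd)                # reserved but not yet faulted in (0)
    total=$(meminfo_val HugePages_Total)              # pool size the kernel reports (1536)
    (( requested == total + surp + resv )) || echo "hugepage accounting mismatch"

With surp and resv both zero the check reduces to requested == HugePages_Total, which is consistent with the (( 1536 == nr_hugepages )) comparison that appears later in this trace.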
00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- 
setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.609 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.609 12:46:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.610 12:46:29 -- setup/common.sh@33 -- # echo 0 00:03:24.610 12:46:29 -- setup/common.sh@33 -- # return 0 00:03:24.610 12:46:29 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.610 12:46:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:24.610 nr_hugepages=1536 00:03:24.610 12:46:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.610 resv_hugepages=0 00:03:24.610 12:46:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.610 surplus_hugepages=0 00:03:24.610 12:46:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.610 anon_hugepages=0 00:03:24.610 12:46:29 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.610 12:46:29 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:24.610 12:46:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.610 12:46:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.610 12:46:29 -- setup/common.sh@18 -- # local node= 00:03:24.610 12:46:29 -- setup/common.sh@19 -- # local var val 00:03:24.610 12:46:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.610 12:46:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.610 12:46:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.610 12:46:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.610 12:46:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.610 12:46:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108561444 kB' 'MemAvailable: 112087844 
kB' 'Buffers: 4124 kB' 'Cached: 10262596 kB' 'SwapCached: 0 kB' 'Active: 7370860 kB' 'Inactive: 3515708 kB' 'Active(anon): 6680464 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622788 kB' 'Mapped: 182356 kB' 'Shmem: 6060616 kB' 'KReclaimable: 289252 kB' 'Slab: 1046432 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757180 kB' 'KernelStack: 27008 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8040616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
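The xtrace above is setup/common.sh's get_meminfo walking a captured meminfo snapshot one field at a time (IFS=': ' plus read -r var val _) until the requested key matches, then echoing its value. A minimal standalone sketch of the same lookup pattern, reading /proc/meminfo directly and using an illustrative helper name rather than the real function:

# Sketch: look up one field the way the xtrace above does, splitting each
# line on ': ' and stopping at the first matching key.
meminfo_get() {
    local get=$1 var val rest
    while IFS=': ' read -r var val rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # numeric value only; the "kB" unit, if any, lands in $rest
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# e.g. meminfo_get HugePages_Rsvd   -> 0 on this run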
00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.610 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.610 12:46:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- 
# continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.611 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.611 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.611 12:46:29 -- setup/common.sh@33 -- # echo 1536 00:03:24.612 12:46:29 -- setup/common.sh@33 -- # return 0 00:03:24.612 12:46:29 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.612 12:46:29 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.612 12:46:29 -- setup/hugepages.sh@27 -- # local node 00:03:24.612 12:46:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.612 12:46:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.612 12:46:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.612 12:46:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.612 12:46:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.612 12:46:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.612 12:46:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.612 12:46:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.612 12:46:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.612 12:46:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.612 12:46:29 -- setup/common.sh@18 -- # local node=0 00:03:24.612 12:46:29 -- setup/common.sh@19 -- # local var val 00:03:24.612 12:46:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.612 12:46:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.612 12:46:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.612 12:46:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.612 12:46:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.612 12:46:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60045272 kB' 'MemUsed: 5613736 kB' 'SwapCached: 0 kB' 'Active: 2393360 kB' 'Inactive: 107576 kB' 'Active(anon): 2083840 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 107576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2391908 kB' 'Mapped: 109256 kB' 'AnonPages: 112176 kB' 'Shmem: 1974812 kB' 'KernelStack: 11928 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158032 kB' 'Slab: 538312 kB' 'SReclaimable: 158032 kB' 'SUnreclaim: 380280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- 
setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 
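The scan running here targets node 0, so the source switched to /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and each captured line first has its "Node <N> " prefix stripped with the extglob pattern visible above. A rough per-node sketch of that variant, helper name illustrative:

# Sketch: per-node lookup. Node meminfo lines look like
# "Node 0 HugePages_Surp:  0", so drop the "Node <N> " prefix before parsing.
shopt -s extglob
node_meminfo_get() {
    local node=$1 get=$2 line var val rest
    while read -r line; do
        line=${line#Node +([0-9]) }                 # same prefix strip as in the log
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}
# e.g. node_meminfo_get 0 HugePages_Surp   -> 0 on this run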
00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.612 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.612 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@33 -- # echo 0 00:03:24.613 12:46:29 -- setup/common.sh@33 -- # return 0 00:03:24.613 12:46:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.613 12:46:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.613 12:46:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.613 12:46:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.613 12:46:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.613 12:46:29 -- 
setup/common.sh@18 -- # local node=1 00:03:24.613 12:46:29 -- setup/common.sh@19 -- # local var val 00:03:24.613 12:46:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.613 12:46:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.613 12:46:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.613 12:46:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.613 12:46:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.613 12:46:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48517160 kB' 'MemUsed: 12162700 kB' 'SwapCached: 0 kB' 'Active: 4977060 kB' 'Inactive: 3408132 kB' 'Active(anon): 4596184 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408132 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7874828 kB' 'Mapped: 73052 kB' 'AnonPages: 510424 kB' 'Shmem: 4085820 kB' 'KernelStack: 15064 kB' 'PageTables: 4924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131220 kB' 'Slab: 508208 kB' 'SReclaimable: 131220 kB' 'SUnreclaim: 376988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.613 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.613 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
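Once the node 1 scan below finishes, the test confirms that the 1536 requested pages were split across the two nodes as intended; the log reports "node0=512 expecting 512" and "node1=1024 expecting 1024" shortly after this point. A compact sketch of that per-node comparison, with the expected split hard-coded for illustration and the per-node helper reused from the sketch above:

# Sketch: compare per-node HugePages_Total against the expected custom_alloc split.
declare -A expected=([0]=512 [1]=1024)
ok=1
for node in 0 1; do
    have=$(node_meminfo_get "$node" HugePages_Total)
    echo "node${node}=${have} expecting ${expected[$node]}"
    [[ $have == "${expected[$node]}" ]] || ok=0
done
(( ok )) || echo "per-node hugepage split does not match the request"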
00:03:24.614 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # continue 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.614 12:46:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.614 12:46:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.614 12:46:29 -- setup/common.sh@33 -- # echo 0 00:03:24.614 12:46:29 -- setup/common.sh@33 -- # return 0 00:03:24.614 12:46:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.614 12:46:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.614 12:46:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.614 12:46:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.614 12:46:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.614 node0=512 expecting 512 00:03:24.614 12:46:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.614 12:46:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.614 12:46:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.614 12:46:29 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:24.614 node1=1024 expecting 1024 00:03:24.614 12:46:29 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:24.614 00:03:24.614 real 0m3.827s 00:03:24.614 user 0m1.494s 00:03:24.614 sys 0m2.381s 00:03:24.614 12:46:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.614 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:03:24.614 ************************************ 00:03:24.614 END TEST custom_alloc 00:03:24.614 ************************************ 00:03:24.614 12:46:29 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:24.614 12:46:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.614 12:46:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.614 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:03:24.874 ************************************ 00:03:24.874 START TEST no_shrink_alloc 00:03:24.874 ************************************ 00:03:24.874 12:46:29 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:24.874 12:46:29 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:24.874 12:46:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.874 12:46:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.874 12:46:29 -- setup/hugepages.sh@51 -- # shift 00:03:24.874 12:46:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.874 12:46:29 -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.874 12:46:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:03:24.874 12:46:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.874 12:46:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.874 12:46:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.874 12:46:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.874 12:46:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.874 12:46:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.874 12:46:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.874 12:46:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.874 12:46:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.874 12:46:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.874 12:46:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.874 12:46:29 -- setup/hugepages.sh@73 -- # return 0 00:03:24.874 12:46:29 -- setup/hugepages.sh@198 -- # setup output 00:03:24.874 12:46:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.874 12:46:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.179 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:28.179 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:28.179 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:28.443 12:46:33 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:28.443 12:46:33 -- setup/hugepages.sh@89 -- # local node 00:03:28.443 12:46:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.443 12:46:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.443 12:46:33 -- setup/hugepages.sh@92 -- # local surp 00:03:28.443 12:46:33 -- setup/hugepages.sh@93 -- # local resv 00:03:28.443 12:46:33 -- setup/hugepages.sh@94 -- # local anon 00:03:28.443 12:46:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.443 12:46:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.443 12:46:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.443 12:46:33 -- setup/common.sh@18 -- # local node= 00:03:28.443 12:46:33 -- setup/common.sh@19 -- # local var val 00:03:28.443 12:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.443 12:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.443 12:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.443 12:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.443 12:46:33 -- setup/common.sh@28 -- # mapfile -t 
mem 00:03:28.443 12:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109609768 kB' 'MemAvailable: 113136168 kB' 'Buffers: 4124 kB' 'Cached: 10262728 kB' 'SwapCached: 0 kB' 'Active: 7370948 kB' 'Inactive: 3515708 kB' 'Active(anon): 6680552 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623220 kB' 'Mapped: 182380 kB' 'Shmem: 6060748 kB' 'KReclaimable: 289252 kB' 'Slab: 1046376 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757124 kB' 'KernelStack: 26976 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8041700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234668 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.443 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.443 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 
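The get_meminfo pass in progress here is looking up AnonHugePages for the no_shrink_alloc setup; it only runs because the earlier "always [madvise] never" check confirmed transparent hugepages are not pinned to [never] (that string is presumably read from the kernel's standard /sys/kernel/mm/transparent_hugepage/enabled knob). A small sketch of that guard, reusing the first helper:

# Sketch: count anonymous hugepages only when THP is not disabled,
# mirroring the "[never]" test seen earlier. 0 is reported on this run.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
    anon=$(meminfo_get AnonHugePages)
else
    anon=0
fi
echo "anon_hugepages=${anon:-0}"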
00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.444 12:46:33 -- setup/common.sh@33 -- # echo 0 00:03:28.444 12:46:33 -- setup/common.sh@33 -- # return 0 00:03:28.444 12:46:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:28.444 12:46:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.444 12:46:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.444 12:46:33 -- setup/common.sh@18 -- # local node= 00:03:28.444 12:46:33 -- setup/common.sh@19 -- # local var val 00:03:28.444 12:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.444 12:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.444 12:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.444 12:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.444 12:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.444 12:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109611144 kB' 'MemAvailable: 113137544 kB' 'Buffers: 4124 kB' 'Cached: 10262728 kB' 'SwapCached: 0 kB' 'Active: 7370584 kB' 'Inactive: 3515708 kB' 'Active(anon): 6680188 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622892 kB' 'Mapped: 182352 kB' 'Shmem: 6060748 kB' 'KReclaimable: 289252 kB' 'Slab: 1046368 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757116 kB' 'KernelStack: 26960 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8041712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.444 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.444 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
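The repeated "[[ <key> == ... ]] / continue" entries above all come from one helper, setup/common.sh:get_meminfo, scanning /proc/meminfo one key at a time. The following is an approximate reconstruction from the xtrace lines only; argument handling, the exact conditionals, and the redirections are assumptions, not copied from the script:

    shopt -s extglob   # needed for the +([0-9]) pattern used below
    get_meminfo() {
        local get=$1      # key to look up, e.g. AnonHugePages or HugePages_Surp
        local node=${2:-} # optional NUMA node id; empty selects the system-wide view
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # when a node id is given, read that node's own meminfo instead (this branch
        # is seen later in the trace when node=0 is passed)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # each non-matching key is one "continue" above
            echo "$val"                      # e.g. "echo 0" for AnonHugePages
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }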
00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 
12:46:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.445 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.445 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.446 12:46:33 -- setup/common.sh@33 -- # echo 0 00:03:28.446 12:46:33 -- setup/common.sh@33 -- # return 0 00:03:28.446 12:46:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.446 12:46:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.446 12:46:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.446 12:46:33 -- setup/common.sh@18 -- # local node= 00:03:28.446 12:46:33 -- setup/common.sh@19 -- # local var val 00:03:28.446 12:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.446 12:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.446 12:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.446 12:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.446 12:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.446 12:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109611284 kB' 'MemAvailable: 113137684 kB' 'Buffers: 4124 kB' 'Cached: 10262740 kB' 'SwapCached: 0 kB' 'Active: 7370580 kB' 'Inactive: 3515708 kB' 'Active(anon): 6680184 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622920 kB' 'Mapped: 182352 kB' 'Shmem: 6060760 kB' 'KReclaimable: 289252 kB' 'Slab: 1046348 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757096 kB' 'KernelStack: 26976 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8041724 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:28.446 12:46:33 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.446 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.446 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.710 12:46:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.710 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- 
setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 
12:46:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.711 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.711 12:46:33 -- setup/common.sh@33 -- # echo 0 00:03:28.711 
12:46:33 -- setup/common.sh@33 -- # return 0 00:03:28.711 12:46:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.711 12:46:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.711 nr_hugepages=1024 00:03:28.711 12:46:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.711 resv_hugepages=0 00:03:28.711 12:46:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.711 surplus_hugepages=0 00:03:28.711 12:46:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.711 anon_hugepages=0 00:03:28.711 12:46:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.711 12:46:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.711 12:46:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.711 12:46:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.711 12:46:33 -- setup/common.sh@18 -- # local node= 00:03:28.711 12:46:33 -- setup/common.sh@19 -- # local var val 00:03:28.711 12:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.711 12:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.711 12:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.711 12:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.711 12:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.711 12:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.711 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109611920 kB' 'MemAvailable: 113138320 kB' 'Buffers: 4124 kB' 'Cached: 10262764 kB' 'SwapCached: 0 kB' 'Active: 7370604 kB' 'Inactive: 3515708 kB' 'Active(anon): 6680208 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622872 kB' 'Mapped: 182352 kB' 'Shmem: 6060784 kB' 'KReclaimable: 289252 kB' 'Slab: 1046348 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757096 kB' 'KernelStack: 26976 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8041740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
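The anon/surp/resv lookups feed the consistency check at setup/hugepages.sh@97-@110. Restated as a sketch, keeping only the values and comparisons visible in the trace; nr_hugepages and the get_meminfo helper are assumed to come from earlier setup:

    nr_hugepages=1024                    # the pool size this test requested
    anon=$(get_meminfo AnonHugePages)    # 0 kB: no THP pages backing anonymous memory
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no pages reserved but not yet faulted in
    echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
         "surplus_hugepages=$surp" "anon_hugepages=$anon"
    (( 1024 == nr_hugepages + surp + resv ))   # requested pages == persistent pool
    (( 1024 == nr_hugepages ))
    # and the kernel must report the same pool size (HugePages_Total is read next,
    # which is what the scan below is doing)
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))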
00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 
00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.712 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.712 12:46:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 
12:46:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.713 12:46:33 -- setup/common.sh@33 -- # echo 1024 00:03:28.713 12:46:33 -- setup/common.sh@33 -- # return 0 00:03:28.713 12:46:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.713 12:46:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.713 12:46:33 -- setup/hugepages.sh@27 -- # local node 00:03:28.713 12:46:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.713 12:46:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.713 12:46:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.713 12:46:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:28.713 12:46:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.713 12:46:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.713 12:46:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.713 12:46:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.713 12:46:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.713 12:46:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.713 12:46:33 
-- setup/common.sh@18 -- # local node=0 00:03:28.713 12:46:33 -- setup/common.sh@19 -- # local var val 00:03:28.713 12:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.713 12:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.713 12:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.713 12:46:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.713 12:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.713 12:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59008824 kB' 'MemUsed: 6650184 kB' 'SwapCached: 0 kB' 'Active: 2392268 kB' 'Inactive: 107576 kB' 'Active(anon): 2082748 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 107576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2391960 kB' 'Mapped: 109300 kB' 'AnonPages: 111036 kB' 'Shmem: 1974864 kB' 'KernelStack: 11864 kB' 'PageTables: 3512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158032 kB' 'Slab: 538100 kB' 'SReclaimable: 158032 kB' 'SUnreclaim: 380068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 
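This second pass re-runs get_meminfo with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo. The surrounding per-node accounting, traced at setup/hugepages.sh@115-@130, is sketched below; nodes_test holds the per-node counts accumulated earlier in verify_nr_hugepages (not shown in this excerpt), nodes_sys holds what the test configured, and the literal 1024/0 starting values are the ones the trace recorded rather than something this sketch computes:

    nodes_sys=([0]=1024 [1]=0)    # expected: all 1024 pages on node 0, none on node 1
    nodes_test=([0]=1024 [1]=0)   # measured so far, per node
    resv=0                        # from the HugePages_Rsvd lookup above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # fold reserved pages into the measured count
        surp=$(get_meminfo HugePages_Surp "$node")   # re-read from node$node's own meminfo
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]  # here: "node0=1024 expecting 1024"
    done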
00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.713 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.713 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # continue 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 12:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 12:46:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.714 12:46:33 -- setup/common.sh@33 -- # echo 0 00:03:28.714 12:46:33 -- setup/common.sh@33 -- # return 0 00:03:28.714 12:46:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.714 12:46:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.714 12:46:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.714 12:46:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.714 12:46:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.714 node0=1024 expecting 1024 00:03:28.714 12:46:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.714 12:46:33 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:28.714 12:46:33 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:28.714 12:46:33 -- setup/hugepages.sh@202 -- # setup output 00:03:28.714 12:46:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.714 12:46:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.021 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:32.021 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:32.021 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:32.286 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:32.286 12:46:37 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:32.286 12:46:37 -- setup/hugepages.sh@89 -- # local node 00:03:32.286 12:46:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.286 12:46:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.286 12:46:37 -- setup/hugepages.sh@92 -- # local surp 00:03:32.286 12:46:37 -- setup/hugepages.sh@93 -- # local resv 00:03:32.286 12:46:37 -- setup/hugepages.sh@94 -- # local anon 00:03:32.286 12:46:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.286 12:46:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.286 12:46:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.286 12:46:37 -- setup/common.sh@18 -- # local node= 00:03:32.286 12:46:37 -- setup/common.sh@19 -- # local var val 00:03:32.286 12:46:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.286 12:46:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.286 12:46:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.286 12:46:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.286 12:46:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.286 12:46:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109583908 kB' 'MemAvailable: 113110308 kB' 'Buffers: 4124 kB' 'Cached: 10262848 kB' 'SwapCached: 0 kB' 'Active: 7372384 kB' 'Inactive: 3515708 kB' 'Active(anon): 6681988 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623936 kB' 'Mapped: 182448 kB' 'Shmem: 6060868 kB' 'KReclaimable: 289252 kB' 'Slab: 1046320 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757068 kB' 'KernelStack: 26976 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8042596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.286 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.286 12:46:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.287 12:46:37 -- setup/common.sh@33 -- # echo 0 00:03:32.287 12:46:37 -- setup/common.sh@33 -- # return 0 00:03:32.287 12:46:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:32.287 12:46:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.287 
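The wall of "[[ key == ... ]] / continue" entries above is setup/common.sh's get_meminfo helper scanning a meminfo file one field at a time: each line is split on ': ' into key and value, every key that is not the requested one is skipped, and the matching value is echoed (here AnonHugePages, giving anon=0; the scan that starts next does the same for HugePages_Surp). A simplified standalone sketch of that pattern follows; it reads only /proc/meminfo and omits the per-node handling visible in the trace (the mapfile load and the stripping of the "Node N " prefix when the per-node sysfs meminfo file is used), so treat it as an illustration rather than the script itself.

  # Sketch: echo the value of one /proc/meminfo field, mirroring the
  # IFS=': ' / read -r var val _ / continue pattern traced above.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # not the requested key, keep scanning
          echo "$val"
          return 0
      done </proc/meminfo
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Total  ->  1024 on this runner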
12:46:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.287 12:46:37 -- setup/common.sh@18 -- # local node= 00:03:32.287 12:46:37 -- setup/common.sh@19 -- # local var val 00:03:32.287 12:46:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.287 12:46:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.287 12:46:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.287 12:46:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.287 12:46:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.287 12:46:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109584416 kB' 'MemAvailable: 113110816 kB' 'Buffers: 4124 kB' 'Cached: 10262848 kB' 'SwapCached: 0 kB' 'Active: 7372056 kB' 'Inactive: 3515708 kB' 'Active(anon): 6681660 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623648 kB' 'Mapped: 182440 kB' 'Shmem: 6060868 kB' 'KReclaimable: 289252 kB' 'Slab: 1046292 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757040 kB' 'KernelStack: 26976 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8042608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.287 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.287 12:46:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # 
continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.288 12:46:37 -- setup/common.sh@33 -- # echo 0 00:03:32.288 12:46:37 -- setup/common.sh@33 -- # return 0 00:03:32.288 12:46:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:32.288 12:46:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.288 12:46:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.288 12:46:37 -- setup/common.sh@18 -- # local node= 00:03:32.288 12:46:37 -- setup/common.sh@19 -- # local var val 00:03:32.288 12:46:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.288 12:46:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.288 12:46:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.288 12:46:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.288 12:46:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.288 12:46:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109584500 kB' 'MemAvailable: 113110900 kB' 'Buffers: 4124 kB' 'Cached: 10262860 kB' 'SwapCached: 0 kB' 
'Active: 7371904 kB' 'Inactive: 3515708 kB' 'Active(anon): 6681508 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623920 kB' 'Mapped: 182364 kB' 'Shmem: 6060880 kB' 'KReclaimable: 289252 kB' 'Slab: 1046276 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757024 kB' 'KernelStack: 26976 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8042620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.288 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.288 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.288 12:46:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 
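The scan in progress here is fetching HugePages_Rsvd; together with the AnonHugePages, HugePages_Surp, and HugePages_Total values read in the other passes it feeds the consistency check that follows in the trace (resv=0, nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0, then "(( 1024 == nr_hugepages + surp + resv ))" and "(( 1024 == nr_hugepages ))"). Below is a minimal standalone rendering of that accounting, using awk instead of the script's get_meminfo helper so it can run on its own; the variable names follow the trace and 1024 is the value requested in this run.

  # Sketch of the hugepage accounting check traced below (not the verify_nr_hugepages source).
  nr_hugepages=1024                                            # requested for this run
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # surplus pages
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # reserved pages
  anon=$(awk '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # THP usage, reported only
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # allocated pool
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  (( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage accounting'
  (( total == nr_hugepages ))               || echo 'allocated total differs from requested'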
00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.289 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.289 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.290 12:46:37 -- setup/common.sh@33 -- # echo 0 00:03:32.290 12:46:37 -- setup/common.sh@33 -- # return 0 00:03:32.290 12:46:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:32.290 12:46:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.290 nr_hugepages=1024 00:03:32.290 12:46:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.290 resv_hugepages=0 00:03:32.290 12:46:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.290 surplus_hugepages=0 00:03:32.290 12:46:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.290 anon_hugepages=0 00:03:32.290 12:46:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.290 12:46:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.290 12:46:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.290 12:46:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.290 12:46:37 -- setup/common.sh@18 -- # local node= 00:03:32.290 12:46:37 -- setup/common.sh@19 -- # local var val 00:03:32.290 12:46:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.290 12:46:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.290 12:46:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.290 12:46:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.290 12:46:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.290 12:46:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109584752 kB' 'MemAvailable: 113111152 kB' 'Buffers: 4124 kB' 'Cached: 10262876 kB' 'SwapCached: 0 kB' 'Active: 7371604 kB' 'Inactive: 3515708 kB' 'Active(anon): 6681208 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623620 kB' 'Mapped: 182364 kB' 'Shmem: 6060896 kB' 'KReclaimable: 289252 kB' 'Slab: 1046276 kB' 'SReclaimable: 289252 kB' 'SUnreclaim: 757024 kB' 'KernelStack: 26976 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8042636 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3585396 kB' 'DirectMap2M: 42231808 kB' 'DirectMap1G: 90177536 kB' 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 12:46:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.290 12:46:37 -- 
setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.291 12:46:37 -- 
setup/common.sh@33 -- # echo 1024 00:03:32.291 12:46:37 -- setup/common.sh@33 -- # return 0 00:03:32.291 12:46:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.291 12:46:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.291 12:46:37 -- setup/hugepages.sh@27 -- # local node 00:03:32.291 12:46:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.291 12:46:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.291 12:46:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.291 12:46:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.291 12:46:37 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.291 12:46:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.291 12:46:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.291 12:46:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.291 12:46:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.291 12:46:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.291 12:46:37 -- setup/common.sh@18 -- # local node=0 00:03:32.291 12:46:37 -- setup/common.sh@19 -- # local var val 00:03:32.291 12:46:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.291 12:46:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.291 12:46:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.291 12:46:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.291 12:46:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.291 12:46:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58984788 kB' 'MemUsed: 6674220 kB' 'SwapCached: 0 kB' 'Active: 2395796 kB' 'Inactive: 107576 kB' 'Active(anon): 2086276 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 107576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2392052 kB' 'Mapped: 109816 kB' 'AnonPages: 114520 kB' 'Shmem: 1974956 kB' 'KernelStack: 11912 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158032 kB' 'Slab: 538044 kB' 'SReclaimable: 158032 kB' 'SUnreclaim: 380012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.291 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 
12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # continue 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.292 12:46:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.292 12:46:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.292 12:46:37 -- setup/common.sh@33 -- # echo 0 00:03:32.292 12:46:37 -- setup/common.sh@33 -- # return 0 00:03:32.292 12:46:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.292 12:46:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.292 12:46:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.292 12:46:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.292 12:46:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.292 node0=1024 expecting 1024 00:03:32.292 12:46:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.292 00:03:32.292 real 0m7.550s 00:03:32.292 user 0m2.980s 00:03:32.292 sys 0m4.680s 00:03:32.292 12:46:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.292 12:46:37 -- common/autotest_common.sh@10 -- # set +x 00:03:32.292 ************************************ 00:03:32.292 END TEST no_shrink_alloc 00:03:32.292 ************************************ 00:03:32.292 12:46:37 -- setup/hugepages.sh@217 -- # clear_hp 00:03:32.292 12:46:37 -- setup/hugepages.sh@37 -- # local node hp 00:03:32.292 12:46:37 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.292 
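Editor's note: the no_shrink_alloc run above closes by re-reading per-node hugepage counters the way every get_meminfo call in this trace does: prefer /sys/devices/system/node/nodeN/meminfo when it exists, strip the "Node N " prefix from each line, then scan key/value pairs with IFS=': '. A minimal standalone sketch of that lookup follows; the helper name and usage line are illustrative, not the repo's API.

get_node_meminfo() {                      # illustrative helper, not setup/common.sh itself
    local key=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node$node/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo
    while IFS= read -r line; do
        line=${line#Node $node }          # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
get_node_meminfo HugePages_Surp 0         # prints 0 on the node traced above

The trace then feeds that value into the per-node accounting ((( nodes_test[node] += resv ))) before clear_hp zeroes nr_hugepages for each node.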
12:46:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.292 12:46:37 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.292 12:46:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.292 12:46:37 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.292 12:46:37 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.292 12:46:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.292 12:46:37 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.292 12:46:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.292 12:46:37 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.292 12:46:37 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.554 12:46:37 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.554 00:03:32.554 real 0m28.021s 00:03:32.554 user 0m10.919s 00:03:32.554 sys 0m17.221s 00:03:32.554 12:46:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.554 12:46:37 -- common/autotest_common.sh@10 -- # set +x 00:03:32.554 ************************************ 00:03:32.554 END TEST hugepages 00:03:32.554 ************************************ 00:03:32.554 12:46:37 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.554 12:46:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.554 12:46:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.554 12:46:37 -- common/autotest_common.sh@10 -- # set +x 00:03:32.554 ************************************ 00:03:32.554 START TEST driver 00:03:32.554 ************************************ 00:03:32.555 12:46:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.817 * Looking for test storage... 
00:03:32.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.817 12:46:37 -- setup/driver.sh@68 -- # setup reset 00:03:32.817 12:46:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.817 12:46:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.110 12:46:42 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:38.110 12:46:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:38.110 12:46:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:38.110 12:46:42 -- common/autotest_common.sh@10 -- # set +x 00:03:38.110 ************************************ 00:03:38.110 START TEST guess_driver 00:03:38.110 ************************************ 00:03:38.110 12:46:42 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:38.110 12:46:42 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:38.110 12:46:42 -- setup/driver.sh@47 -- # local fail=0 00:03:38.110 12:46:42 -- setup/driver.sh@49 -- # pick_driver 00:03:38.110 12:46:42 -- setup/driver.sh@36 -- # vfio 00:03:38.110 12:46:42 -- setup/driver.sh@21 -- # local iommu_grups 00:03:38.110 12:46:42 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:38.110 12:46:42 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:38.110 12:46:42 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:38.110 12:46:42 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:38.110 12:46:42 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:03:38.110 12:46:42 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:38.110 12:46:42 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:38.110 12:46:42 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:38.110 12:46:42 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:38.110 12:46:42 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:38.110 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.110 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.110 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.110 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.110 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:38.110 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:38.110 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:38.110 12:46:42 -- setup/driver.sh@30 -- # return 0 00:03:38.110 12:46:42 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:38.110 12:46:42 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:38.110 12:46:42 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:38.110 12:46:42 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:38.110 Looking for driver=vfio-pci 00:03:38.110 12:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.110 12:46:42 -- setup/driver.sh@45 -- # setup output config 00:03:38.110 12:46:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.110 12:46:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.411 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.411 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:41.411 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.411 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.411 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.412 12:46:46 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:41.412 12:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.412 12:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.674 12:46:46 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:41.674 12:46:46 -- setup/driver.sh@65 -- # setup reset 00:03:41.674 12:46:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.674 12:46:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.886 00:03:45.886 real 0m8.317s 00:03:45.886 user 0m2.573s 00:03:45.886 sys 0m4.844s 00:03:45.886 12:46:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:45.886 12:46:50 -- common/autotest_common.sh@10 -- # set +x 00:03:45.886 ************************************ 00:03:45.886 END TEST guess_driver 00:03:45.886 ************************************ 00:03:46.146 00:03:46.146 real 0m13.444s 00:03:46.146 user 0m4.138s 00:03:46.146 sys 0m7.629s 00:03:46.146 12:46:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:46.146 12:46:50 -- common/autotest_common.sh@10 -- # set +x 00:03:46.146 ************************************ 00:03:46.146 END TEST driver 00:03:46.146 ************************************ 00:03:46.146 12:46:51 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.146 12:46:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.146 12:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.146 12:46:51 -- common/autotest_common.sh@10 -- # set +x 00:03:46.146 ************************************ 00:03:46.146 START TEST devices 00:03:46.146 ************************************ 00:03:46.146 12:46:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.408 * Looking for test storage... 
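Editor's note: the guess_driver pass that just finished settled on vfio-pci because unsafe no-IOMMU mode was off (N), /sys/kernel/iommu_groups was populated (322 groups in this trace), and modprobe could resolve vfio_pci to real modules. A condensed, hedged restatement of that decision logic follows; it is not a copy of setup/driver.sh, and the uio_pci_generic fallback is assumed rather than shown in this trace.

pick_driver() {
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local groups=(/sys/kernel/iommu_groups/*)   # 322 entries on the node traced above
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
        modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic                    # assumed fallback, not exercised here
    else
        echo 'No valid driver found'
    fi
}
pick_driver    # prints vfio-pci on the node traced above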
00:03:46.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:46.408 12:46:51 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.408 12:46:51 -- setup/devices.sh@192 -- # setup reset 00:03:46.408 12:46:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.408 12:46:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.618 12:46:55 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.618 12:46:55 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:50.618 12:46:55 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:50.618 12:46:55 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:50.618 12:46:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:50.618 12:46:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:50.618 12:46:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:50.618 12:46:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.618 12:46:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:50.618 12:46:55 -- setup/devices.sh@196 -- # blocks=() 00:03:50.618 12:46:55 -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.618 12:46:55 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.618 12:46:55 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.618 12:46:55 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:50.618 12:46:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.618 12:46:55 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:50.618 12:46:55 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.618 12:46:55 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:50.618 12:46:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:50.618 12:46:55 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.618 12:46:55 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.618 12:46:55 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.618 No valid GPT data, bailing 00:03:50.618 12:46:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.618 12:46:55 -- scripts/common.sh@391 -- # pt= 00:03:50.618 12:46:55 -- scripts/common.sh@392 -- # return 1 00:03:50.618 12:46:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.618 12:46:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.618 12:46:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.618 12:46:55 -- setup/common.sh@80 -- # echo 1920383410176 00:03:50.618 12:46:55 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:50.618 12:46:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.618 12:46:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:50.618 12:46:55 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:50.618 12:46:55 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.618 12:46:55 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.618 12:46:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.618 12:46:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.618 12:46:55 -- common/autotest_common.sh@10 -- # set +x 00:03:50.618 ************************************ 00:03:50.618 START TEST nvme_mount 00:03:50.618 ************************************ 00:03:50.618 12:46:55 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:50.618 12:46:55 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.618 12:46:55 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.618 12:46:55 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.618 12:46:55 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.618 12:46:55 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.618 12:46:55 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.618 12:46:55 -- setup/common.sh@40 -- # local part_no=1 00:03:50.618 12:46:55 -- setup/common.sh@41 -- # local size=1073741824 00:03:50.618 12:46:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.618 12:46:55 -- setup/common.sh@44 -- # parts=() 00:03:50.618 12:46:55 -- setup/common.sh@44 -- # local parts 00:03:50.618 12:46:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.618 12:46:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.618 12:46:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.618 12:46:55 -- setup/common.sh@46 -- # (( part++ )) 00:03:50.618 12:46:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.618 12:46:55 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:50.618 12:46:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.618 12:46:55 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.563 Creating new GPT entries in memory. 00:03:51.563 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.563 other utilities. 00:03:51.563 12:46:56 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.563 12:46:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.563 12:46:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.563 12:46:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.563 12:46:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:52.511 Creating new GPT entries in memory. 00:03:52.511 The operation has completed successfully. 
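Editor's note: the nvme_mount flow above, together with the mkfs/mount/verify steps that follow, reduces to wiping the GPT, creating one 1 GiB partition, waiting for the partition uevent, then formatting and mounting it. A condensed sketch follows; the mount point is illustrative, and udevadm settle stands in for scripts/sync_dev_uevents.sh.

disk=/dev/nvme0n1                                    # device name taken from the trace
mnt=/mnt/nvme_mount_test                             # illustrative mount point, not the repo path
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # 2097152 sectors == 1 GiB, as in the trace
udevadm settle                                       # stand-in for sync_dev_uevents.sh
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                               # dummy file the verify step checks for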
00:03:52.511 12:46:57 -- setup/common.sh@57 -- # (( part++ )) 00:03:52.511 12:46:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.511 12:46:57 -- setup/common.sh@62 -- # wait 3736781 00:03:52.511 12:46:57 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.511 12:46:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:52.511 12:46:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.511 12:46:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.511 12:46:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.511 12:46:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.511 12:46:57 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.511 12:46:57 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:52.511 12:46:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.511 12:46:57 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.511 12:46:57 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.511 12:46:57 -- setup/devices.sh@53 -- # local found=0 00:03:52.511 12:46:57 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.511 12:46:57 -- setup/devices.sh@56 -- # : 00:03:52.511 12:46:57 -- setup/devices.sh@59 -- # local pci status 00:03:52.511 12:46:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.511 12:46:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:52.511 12:46:57 -- setup/devices.sh@47 -- # setup output config 00:03:52.511 12:46:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.511 12:46:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.902 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.902 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:55.903 12:47:00 -- setup/devices.sh@63 -- # found=1 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.903 12:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.903 12:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.163 12:47:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.163 12:47:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.163 12:47:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.163 12:47:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.163 12:47:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.163 12:47:01 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:56.163 12:47:01 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.163 12:47:01 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.423 12:47:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.423 12:47:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.423 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.423 12:47:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.423 12:47:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.684 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.684 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:03:56.684 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.684 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.684 12:47:01 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:56.684 12:47:01 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:56.684 12:47:01 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.684 12:47:01 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.684 12:47:01 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.684 12:47:01 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.684 12:47:01 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.684 12:47:01 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:56.684 12:47:01 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.684 12:47:01 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.684 12:47:01 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.684 12:47:01 -- setup/devices.sh@53 -- # local found=0 00:03:56.684 12:47:01 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.684 12:47:01 -- setup/devices.sh@56 -- # : 00:03:56.684 12:47:01 -- setup/devices.sh@59 -- # local pci status 00:03:56.684 12:47:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.684 12:47:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:56.684 12:47:01 -- setup/devices.sh@47 -- # setup output config 00:03:56.684 12:47:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.684 12:47:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:59.988 12:47:04 -- setup/devices.sh@63 -- # found=1 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.988 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.988 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.989 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.989 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.989 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.989 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.989 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.989 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.989 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.989 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.989 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.989 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.989 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.989 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.989 12:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:59.989 12:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.249 12:47:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.249 12:47:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:00.249 12:47:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.249 12:47:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.249 12:47:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.249 12:47:05 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.249 12:47:05 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:00.249 12:47:05 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:00.249 12:47:05 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:00.249 12:47:05 -- setup/devices.sh@50 -- # local mount_point= 00:04:00.249 12:47:05 -- setup/devices.sh@51 -- # local test_file= 00:04:00.249 12:47:05 -- setup/devices.sh@53 -- # local found=0 00:04:00.249 12:47:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:00.249 12:47:05 -- setup/devices.sh@59 -- # local pci status 00:04:00.249 12:47:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.249 12:47:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:00.249 12:47:05 -- setup/devices.sh@47 -- # setup output config 00:04:00.249 12:47:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.249 12:47:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.549 12:47:08 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.549 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.549 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.549 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.549 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.549 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.549 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.549 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.549 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.809 12:47:08 -- setup/devices.sh@63 -- # found=1 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.809 12:47:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.809 12:47:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.071 12:47:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.071 12:47:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:04.071 12:47:08 -- setup/devices.sh@68 -- # return 0 00:04:04.071 12:47:08 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:04.071 12:47:08 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.071 12:47:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:04.071 12:47:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:04.071 12:47:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:04.071 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.071 00:04:04.071 real 0m13.562s 00:04:04.071 user 0m4.274s 00:04:04.071 sys 0m7.144s 00:04:04.071 12:47:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.071 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:04:04.071 ************************************ 00:04:04.071 END TEST nvme_mount 00:04:04.071 ************************************ 00:04:04.071 12:47:09 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:04.071 12:47:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.071 12:47:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.071 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:04:04.333 ************************************ 00:04:04.333 START TEST dm_mount 00:04:04.333 ************************************ 00:04:04.333 12:47:09 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:04.333 12:47:09 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:04.333 12:47:09 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:04.333 12:47:09 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:04.333 12:47:09 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:04.333 12:47:09 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:04.333 12:47:09 -- setup/common.sh@40 -- # local part_no=2 00:04:04.333 12:47:09 -- setup/common.sh@41 -- # local size=1073741824 00:04:04.333 12:47:09 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.333 12:47:09 -- setup/common.sh@44 -- # parts=() 00:04:04.333 12:47:09 -- setup/common.sh@44 -- # local parts 00:04:04.333 12:47:09 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.333 12:47:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.333 12:47:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.333 12:47:09 -- setup/common.sh@46 -- # (( part++ )) 00:04:04.333 12:47:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.333 12:47:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.333 12:47:09 -- setup/common.sh@46 -- # (( part++ )) 00:04:04.333 12:47:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.333 12:47:09 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:04.333 12:47:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:04.333 12:47:09 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:05.275 Creating new GPT entries in memory. 00:04:05.275 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.275 other utilities. 00:04:05.275 12:47:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.275 12:47:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.275 12:47:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.275 12:47:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.275 12:47:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:06.220 Creating new GPT entries in memory. 00:04:06.220 The operation has completed successfully. 
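Editor's note: the dm_mount test starting here repeats the same pattern with two 1 GiB partitions, then layers a device-mapper node on top before formatting it. The dm table is not echoed in this trace, so the linear concatenation below is an assumption, as are the mount point and the use of udevadm settle in place of sync_dev_uevents.sh.

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199        # partition 1, 1 GiB
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351     # partition 2, 1 GiB
udevadm settle
# Assumed table: concatenate the two partitions (2097152 sectors each) into one device.
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /mnt/dm_mount_test                              # illustrative mount point
mount /dev/mapper/nvme_dm_test /mnt/dm_mount_test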
00:04:06.220 12:47:11 -- setup/common.sh@57 -- # (( part++ )) 00:04:06.220 12:47:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.220 12:47:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:06.220 12:47:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:06.220 12:47:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:07.605 The operation has completed successfully. 00:04:07.605 12:47:12 -- setup/common.sh@57 -- # (( part++ )) 00:04:07.605 12:47:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.605 12:47:12 -- setup/common.sh@62 -- # wait 3741942 00:04:07.605 12:47:12 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:07.605 12:47:12 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.605 12:47:12 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.605 12:47:12 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:07.605 12:47:12 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:07.605 12:47:12 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.605 12:47:12 -- setup/devices.sh@161 -- # break 00:04:07.605 12:47:12 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.605 12:47:12 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:07.605 12:47:12 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:07.605 12:47:12 -- setup/devices.sh@166 -- # dm=dm-1 00:04:07.605 12:47:12 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:07.605 12:47:12 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:07.605 12:47:12 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.605 12:47:12 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:07.605 12:47:12 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.605 12:47:12 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.605 12:47:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:07.605 12:47:12 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.605 12:47:12 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.605 12:47:12 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:07.605 12:47:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:07.605 12:47:12 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.605 12:47:12 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.605 12:47:12 -- setup/devices.sh@53 -- # local found=0 00:04:07.605 12:47:12 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.605 12:47:12 -- setup/devices.sh@56 -- # : 00:04:07.605 12:47:12 -- 
setup/devices.sh@59 -- # local pci status 00:04:07.605 12:47:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.606 12:47:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:07.606 12:47:12 -- setup/devices.sh@47 -- # setup output config 00:04:07.606 12:47:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.606 12:47:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:10.907 12:47:15 -- setup/devices.sh@63 -- # found=1 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.907 12:47:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.907 12:47:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.211 12:47:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.211 12:47:15 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:11.211 12:47:15 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:11.211 12:47:16 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:11.211 12:47:16 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:11.211 12:47:16 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:11.211 12:47:16 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:11.211 12:47:16 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:11.211 12:47:16 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:11.211 12:47:16 -- setup/devices.sh@50 -- # local mount_point= 00:04:11.211 12:47:16 -- setup/devices.sh@51 -- # local test_file= 00:04:11.211 12:47:16 -- setup/devices.sh@53 -- # local found=0 00:04:11.211 12:47:16 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:11.211 12:47:16 -- setup/devices.sh@59 -- # local pci status 00:04:11.211 12:47:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.211 12:47:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:11.211 12:47:16 -- setup/devices.sh@47 -- # setup output config 00:04:11.211 12:47:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.211 12:47:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:14.511 12:47:19 -- setup/devices.sh@63 -- # found=1 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.511 12:47:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.511 12:47:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:14.511 12:47:19 -- setup/devices.sh@68 -- # return 0 00:04:14.511 12:47:19 -- setup/devices.sh@187 -- # cleanup_dm 00:04:14.511 12:47:19 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.511 12:47:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:14.511 12:47:19 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:14.511 12:47:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:14.511 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:14.511 12:47:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:14.511 12:47:19 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:14.511 00:04:14.511 real 0m10.367s 00:04:14.511 user 0m2.711s 00:04:14.511 sys 0m4.638s 00:04:14.511 12:47:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:14.511 12:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:14.511 ************************************ 00:04:14.511 END TEST dm_mount 00:04:14.511 ************************************ 00:04:14.772 12:47:19 -- setup/devices.sh@1 -- # cleanup 00:04:14.772 12:47:19 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:14.772 12:47:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.772 12:47:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.772 12:47:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:14.772 12:47:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:14.772 12:47:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.033 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
00:04:15.033 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.033 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.033 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.033 12:47:19 -- setup/devices.sh@12 -- # cleanup_dm 00:04:15.033 12:47:19 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.033 12:47:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.033 12:47:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.033 12:47:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.033 12:47:19 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.033 12:47:19 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:15.033 00:04:15.033 real 0m28.732s 00:04:15.033 user 0m8.653s 00:04:15.033 sys 0m14.741s 00:04:15.033 12:47:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:15.033 12:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:15.033 ************************************ 00:04:15.033 END TEST devices 00:04:15.033 ************************************ 00:04:15.033 00:04:15.033 real 1m37.303s 00:04:15.033 user 0m32.698s 00:04:15.033 sys 0m55.317s 00:04:15.033 12:47:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:15.033 12:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:15.033 ************************************ 00:04:15.033 END TEST setup.sh 00:04:15.034 ************************************ 00:04:15.034 12:47:19 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:18.336 Hugepages 00:04:18.336 node hugesize free / total 00:04:18.336 node0 1048576kB 0 / 0 00:04:18.336 node0 2048kB 2048 / 2048 00:04:18.336 node1 1048576kB 0 / 0 00:04:18.336 node1 2048kB 0 / 0 00:04:18.336 00:04:18.336 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.336 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:18.336 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:18.336 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:18.336 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:18.336 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:18.336 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:18.336 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:18.336 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:18.336 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:18.336 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:18.336 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:18.336 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:18.336 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:18.336 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:18.336 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:18.336 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:18.336 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:18.336 12:47:23 -- spdk/autotest.sh@130 -- # uname -s 00:04:18.336 12:47:23 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:18.336 12:47:23 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:18.337 12:47:23 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.635 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:21.895 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:21.895 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:23.811 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:24.071 12:47:28 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:25.011 12:47:29 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:25.011 12:47:29 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:25.011 12:47:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:25.011 12:47:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:25.011 12:47:29 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:25.011 12:47:29 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:25.011 12:47:29 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.011 12:47:29 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.011 12:47:29 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:25.011 12:47:30 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:25.011 12:47:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:25.011 12:47:30 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.376 Waiting for block devices as requested 00:04:28.376 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:28.636 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:28.636 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:28.636 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:28.896 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:28.896 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:28.896 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:29.155 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:29.155 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:29.415 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:29.415 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:29.415 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:29.415 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:29.683 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:29.683 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:29.683 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:29.683 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:29.950 12:47:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:29.950 12:47:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:29.950 12:47:34 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:29.950 12:47:34 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:04:29.950 12:47:34 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:29.950 12:47:34 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:29.950 12:47:34 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:29.950 12:47:34 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:29.950 12:47:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:29.950 12:47:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:29.950 12:47:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:29.950 12:47:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:29.950 12:47:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:29.950 12:47:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:29.950 12:47:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:29.950 12:47:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:29.950 12:47:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:29.950 12:47:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:29.950 12:47:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.212 12:47:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.212 12:47:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.212 12:47:35 -- common/autotest_common.sh@1543 -- # continue 00:04:30.212 12:47:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:30.212 12:47:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:30.212 12:47:35 -- common/autotest_common.sh@10 -- # set +x 00:04:30.212 12:47:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:30.212 12:47:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:30.212 12:47:35 -- common/autotest_common.sh@10 -- # set +x 00:04:30.212 12:47:35 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.541 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:33.541 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:34.114 12:47:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:34.114 12:47:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:34.114 12:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:34.114 12:47:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:34.114 12:47:38 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:34.114 12:47:38 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:34.114 12:47:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:34.114 12:47:38 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:34.114 12:47:38 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:04:34.114 12:47:38 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:34.114 12:47:38 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:34.114 12:47:38 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.114 12:47:38 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.114 12:47:38 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:34.114 12:47:38 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:34.114 12:47:38 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:34.114 12:47:38 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:34.114 12:47:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:34.114 12:47:38 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:34.114 12:47:38 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:34.114 12:47:38 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:34.114 12:47:38 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:34.114 12:47:38 -- common/autotest_common.sh@1579 -- # return 0 00:04:34.114 12:47:39 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:34.114 12:47:39 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:34.114 12:47:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.114 12:47:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.114 12:47:39 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:34.114 12:47:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:34.114 12:47:39 -- common/autotest_common.sh@10 -- # set +x 00:04:34.114 12:47:39 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:34.114 12:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.114 12:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.114 12:47:39 -- common/autotest_common.sh@10 -- # set +x 00:04:34.375 ************************************ 00:04:34.375 START TEST env 00:04:34.375 ************************************ 00:04:34.375 12:47:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:34.375 * Looking for test storage... 
00:04:34.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:34.375 12:47:39 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:34.375 12:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.375 12:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.375 12:47:39 -- common/autotest_common.sh@10 -- # set +x 00:04:34.375 ************************************ 00:04:34.375 START TEST env_memory 00:04:34.375 ************************************ 00:04:34.375 12:47:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:34.636 00:04:34.636 00:04:34.636 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.636 http://cunit.sourceforge.net/ 00:04:34.636 00:04:34.636 00:04:34.636 Suite: memory 00:04:34.636 Test: alloc and free memory map ...[2024-04-26 12:47:39.480474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:34.636 passed 00:04:34.636 Test: mem map translation ...[2024-04-26 12:47:39.506026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:34.636 [2024-04-26 12:47:39.506054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:34.636 [2024-04-26 12:47:39.506102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:34.636 [2024-04-26 12:47:39.506109] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:34.636 passed 00:04:34.636 Test: mem map registration ...[2024-04-26 12:47:39.561525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:34.636 [2024-04-26 12:47:39.561554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:34.636 passed 00:04:34.636 Test: mem map adjacent registrations ...passed 00:04:34.636 00:04:34.636 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.636 suites 1 1 n/a 0 0 00:04:34.636 tests 4 4 4 0 0 00:04:34.636 asserts 152 152 152 0 n/a 00:04:34.636 00:04:34.636 Elapsed time = 0.193 seconds 00:04:34.636 00:04:34.636 real 0m0.207s 00:04:34.636 user 0m0.197s 00:04:34.636 sys 0m0.008s 00:04:34.636 12:47:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:34.636 12:47:39 -- common/autotest_common.sh@10 -- # set +x 00:04:34.636 ************************************ 00:04:34.636 END TEST env_memory 00:04:34.636 ************************************ 00:04:34.636 12:47:39 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:34.636 12:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.636 12:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.636 12:47:39 -- common/autotest_common.sh@10 -- # set +x 
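Each of these env sub-tests is launched through the run_test wrapper from autotest_common.sh, which is what produces the START/END banners and the real/user/sys timings seen above and below. A simplified, illustrative sketch of that wrapper (not the exact SPDK implementation) is:

    run_test() {
        # frame the command in banners and time it, as in the log output
        local test_name=$1; shift
        printf '%s\n' '************************************' "START TEST $test_name" '************************************'
        time "$@"
        printf '%s\n' '************************************' "END TEST $test_name" '************************************'
    }
    run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys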
00:04:34.897 ************************************ 00:04:34.897 START TEST env_vtophys 00:04:34.897 ************************************ 00:04:34.897 12:47:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:34.897 EAL: lib.eal log level changed from notice to debug 00:04:34.897 EAL: Detected lcore 0 as core 0 on socket 0 00:04:34.897 EAL: Detected lcore 1 as core 1 on socket 0 00:04:34.897 EAL: Detected lcore 2 as core 2 on socket 0 00:04:34.897 EAL: Detected lcore 3 as core 3 on socket 0 00:04:34.897 EAL: Detected lcore 4 as core 4 on socket 0 00:04:34.897 EAL: Detected lcore 5 as core 5 on socket 0 00:04:34.897 EAL: Detected lcore 6 as core 6 on socket 0 00:04:34.897 EAL: Detected lcore 7 as core 7 on socket 0 00:04:34.897 EAL: Detected lcore 8 as core 8 on socket 0 00:04:34.897 EAL: Detected lcore 9 as core 9 on socket 0 00:04:34.897 EAL: Detected lcore 10 as core 10 on socket 0 00:04:34.897 EAL: Detected lcore 11 as core 11 on socket 0 00:04:34.897 EAL: Detected lcore 12 as core 12 on socket 0 00:04:34.897 EAL: Detected lcore 13 as core 13 on socket 0 00:04:34.897 EAL: Detected lcore 14 as core 14 on socket 0 00:04:34.897 EAL: Detected lcore 15 as core 15 on socket 0 00:04:34.897 EAL: Detected lcore 16 as core 16 on socket 0 00:04:34.897 EAL: Detected lcore 17 as core 17 on socket 0 00:04:34.897 EAL: Detected lcore 18 as core 18 on socket 0 00:04:34.897 EAL: Detected lcore 19 as core 19 on socket 0 00:04:34.897 EAL: Detected lcore 20 as core 20 on socket 0 00:04:34.897 EAL: Detected lcore 21 as core 21 on socket 0 00:04:34.897 EAL: Detected lcore 22 as core 22 on socket 0 00:04:34.897 EAL: Detected lcore 23 as core 23 on socket 0 00:04:34.897 EAL: Detected lcore 24 as core 24 on socket 0 00:04:34.897 EAL: Detected lcore 25 as core 25 on socket 0 00:04:34.897 EAL: Detected lcore 26 as core 26 on socket 0 00:04:34.897 EAL: Detected lcore 27 as core 27 on socket 0 00:04:34.897 EAL: Detected lcore 28 as core 28 on socket 0 00:04:34.897 EAL: Detected lcore 29 as core 29 on socket 0 00:04:34.897 EAL: Detected lcore 30 as core 30 on socket 0 00:04:34.897 EAL: Detected lcore 31 as core 31 on socket 0 00:04:34.897 EAL: Detected lcore 32 as core 32 on socket 0 00:04:34.897 EAL: Detected lcore 33 as core 33 on socket 0 00:04:34.897 EAL: Detected lcore 34 as core 34 on socket 0 00:04:34.897 EAL: Detected lcore 35 as core 35 on socket 0 00:04:34.897 EAL: Detected lcore 36 as core 0 on socket 1 00:04:34.897 EAL: Detected lcore 37 as core 1 on socket 1 00:04:34.897 EAL: Detected lcore 38 as core 2 on socket 1 00:04:34.897 EAL: Detected lcore 39 as core 3 on socket 1 00:04:34.897 EAL: Detected lcore 40 as core 4 on socket 1 00:04:34.897 EAL: Detected lcore 41 as core 5 on socket 1 00:04:34.897 EAL: Detected lcore 42 as core 6 on socket 1 00:04:34.897 EAL: Detected lcore 43 as core 7 on socket 1 00:04:34.897 EAL: Detected lcore 44 as core 8 on socket 1 00:04:34.897 EAL: Detected lcore 45 as core 9 on socket 1 00:04:34.897 EAL: Detected lcore 46 as core 10 on socket 1 00:04:34.897 EAL: Detected lcore 47 as core 11 on socket 1 00:04:34.897 EAL: Detected lcore 48 as core 12 on socket 1 00:04:34.897 EAL: Detected lcore 49 as core 13 on socket 1 00:04:34.897 EAL: Detected lcore 50 as core 14 on socket 1 00:04:34.897 EAL: Detected lcore 51 as core 15 on socket 1 00:04:34.897 EAL: Detected lcore 52 as core 16 on socket 1 00:04:34.897 EAL: Detected lcore 53 as core 17 on socket 1 00:04:34.897 EAL: Detected lcore 54 as core 18 on socket 1 
00:04:34.897 EAL: Detected lcore 55 as core 19 on socket 1 00:04:34.897 EAL: Detected lcore 56 as core 20 on socket 1 00:04:34.897 EAL: Detected lcore 57 as core 21 on socket 1 00:04:34.897 EAL: Detected lcore 58 as core 22 on socket 1 00:04:34.897 EAL: Detected lcore 59 as core 23 on socket 1 00:04:34.897 EAL: Detected lcore 60 as core 24 on socket 1 00:04:34.897 EAL: Detected lcore 61 as core 25 on socket 1 00:04:34.897 EAL: Detected lcore 62 as core 26 on socket 1 00:04:34.897 EAL: Detected lcore 63 as core 27 on socket 1 00:04:34.897 EAL: Detected lcore 64 as core 28 on socket 1 00:04:34.897 EAL: Detected lcore 65 as core 29 on socket 1 00:04:34.897 EAL: Detected lcore 66 as core 30 on socket 1 00:04:34.897 EAL: Detected lcore 67 as core 31 on socket 1 00:04:34.897 EAL: Detected lcore 68 as core 32 on socket 1 00:04:34.897 EAL: Detected lcore 69 as core 33 on socket 1 00:04:34.897 EAL: Detected lcore 70 as core 34 on socket 1 00:04:34.897 EAL: Detected lcore 71 as core 35 on socket 1 00:04:34.897 EAL: Detected lcore 72 as core 0 on socket 0 00:04:34.897 EAL: Detected lcore 73 as core 1 on socket 0 00:04:34.897 EAL: Detected lcore 74 as core 2 on socket 0 00:04:34.897 EAL: Detected lcore 75 as core 3 on socket 0 00:04:34.897 EAL: Detected lcore 76 as core 4 on socket 0 00:04:34.897 EAL: Detected lcore 77 as core 5 on socket 0 00:04:34.897 EAL: Detected lcore 78 as core 6 on socket 0 00:04:34.897 EAL: Detected lcore 79 as core 7 on socket 0 00:04:34.897 EAL: Detected lcore 80 as core 8 on socket 0 00:04:34.897 EAL: Detected lcore 81 as core 9 on socket 0 00:04:34.897 EAL: Detected lcore 82 as core 10 on socket 0 00:04:34.897 EAL: Detected lcore 83 as core 11 on socket 0 00:04:34.898 EAL: Detected lcore 84 as core 12 on socket 0 00:04:34.898 EAL: Detected lcore 85 as core 13 on socket 0 00:04:34.898 EAL: Detected lcore 86 as core 14 on socket 0 00:04:34.898 EAL: Detected lcore 87 as core 15 on socket 0 00:04:34.898 EAL: Detected lcore 88 as core 16 on socket 0 00:04:34.898 EAL: Detected lcore 89 as core 17 on socket 0 00:04:34.898 EAL: Detected lcore 90 as core 18 on socket 0 00:04:34.898 EAL: Detected lcore 91 as core 19 on socket 0 00:04:34.898 EAL: Detected lcore 92 as core 20 on socket 0 00:04:34.898 EAL: Detected lcore 93 as core 21 on socket 0 00:04:34.898 EAL: Detected lcore 94 as core 22 on socket 0 00:04:34.898 EAL: Detected lcore 95 as core 23 on socket 0 00:04:34.898 EAL: Detected lcore 96 as core 24 on socket 0 00:04:34.898 EAL: Detected lcore 97 as core 25 on socket 0 00:04:34.898 EAL: Detected lcore 98 as core 26 on socket 0 00:04:34.898 EAL: Detected lcore 99 as core 27 on socket 0 00:04:34.898 EAL: Detected lcore 100 as core 28 on socket 0 00:04:34.898 EAL: Detected lcore 101 as core 29 on socket 0 00:04:34.898 EAL: Detected lcore 102 as core 30 on socket 0 00:04:34.898 EAL: Detected lcore 103 as core 31 on socket 0 00:04:34.898 EAL: Detected lcore 104 as core 32 on socket 0 00:04:34.898 EAL: Detected lcore 105 as core 33 on socket 0 00:04:34.898 EAL: Detected lcore 106 as core 34 on socket 0 00:04:34.898 EAL: Detected lcore 107 as core 35 on socket 0 00:04:34.898 EAL: Detected lcore 108 as core 0 on socket 1 00:04:34.898 EAL: Detected lcore 109 as core 1 on socket 1 00:04:34.898 EAL: Detected lcore 110 as core 2 on socket 1 00:04:34.898 EAL: Detected lcore 111 as core 3 on socket 1 00:04:34.898 EAL: Detected lcore 112 as core 4 on socket 1 00:04:34.898 EAL: Detected lcore 113 as core 5 on socket 1 00:04:34.898 EAL: Detected lcore 114 as core 6 on socket 1 00:04:34.898 
EAL: Detected lcore 115 as core 7 on socket 1 00:04:34.898 EAL: Detected lcore 116 as core 8 on socket 1 00:04:34.898 EAL: Detected lcore 117 as core 9 on socket 1 00:04:34.898 EAL: Detected lcore 118 as core 10 on socket 1 00:04:34.898 EAL: Detected lcore 119 as core 11 on socket 1 00:04:34.898 EAL: Detected lcore 120 as core 12 on socket 1 00:04:34.898 EAL: Detected lcore 121 as core 13 on socket 1 00:04:34.898 EAL: Detected lcore 122 as core 14 on socket 1 00:04:34.898 EAL: Detected lcore 123 as core 15 on socket 1 00:04:34.898 EAL: Detected lcore 124 as core 16 on socket 1 00:04:34.898 EAL: Detected lcore 125 as core 17 on socket 1 00:04:34.898 EAL: Detected lcore 126 as core 18 on socket 1 00:04:34.898 EAL: Detected lcore 127 as core 19 on socket 1 00:04:34.898 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:34.898 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:34.898 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:34.898 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:34.898 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:34.898 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:34.898 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:34.898 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:34.898 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:34.898 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:34.898 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:34.898 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:34.898 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:34.898 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:34.898 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:34.898 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:34.898 EAL: Maximum logical cores by configuration: 128 00:04:34.898 EAL: Detected CPU lcores: 128 00:04:34.898 EAL: Detected NUMA nodes: 2 00:04:34.898 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:34.898 EAL: Detected shared linkage of DPDK 00:04:34.898 EAL: No shared files mode enabled, IPC will be disabled 00:04:34.898 EAL: Bus pci wants IOVA as 'DC' 00:04:34.898 EAL: Buses did not request a specific IOVA mode. 00:04:34.898 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:34.898 EAL: Selected IOVA mode 'VA' 00:04:34.898 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.898 EAL: Probing VFIO support... 00:04:34.898 EAL: IOMMU type 1 (Type 1) is supported 00:04:34.898 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:34.898 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:34.898 EAL: VFIO support initialized 00:04:34.898 EAL: Ask a virtual area of 0x2e000 bytes 00:04:34.898 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:34.898 EAL: Setting up physically contiguous memory... 
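The EAL probe above only succeeds because the host exposes an IOMMU, the vfio-pci driver, and the 2048 kB hugepages reserved earlier by setup.sh. A few illustrative host-side checks (not taken from this log) that correspond to those probe messages:

    ls /sys/kernel/iommu_groups | wc -l    # non-zero once the IOMMU is enabled, matching "VFIO support initialized"
    lsmod | grep vfio_pci                  # driver that setup.sh bound the NVMe and IOAT devices to
    grep -i hugepages /proc/meminfo        # the 2048 kB pages listed by 'setup.sh status'
    cat /proc/sys/vm/nr_hugepages          # count of default-size hugepages currently reserved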
00:04:34.898 EAL: Setting maximum number of open files to 524288 00:04:34.898 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:34.898 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:34.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:34.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:34.898 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.898 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:34.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.898 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.898 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:34.898 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:34.898 EAL: Hugepages will be freed exactly as allocated. 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: TSC frequency is ~2400000 KHz 00:04:34.898 EAL: Main lcore 0 is ready (tid=7f39c2cf9a00;cpuset=[0]) 00:04:34.898 EAL: Trying to obtain current memory policy. 00:04:34.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.898 EAL: Restoring previous memory policy: 0 00:04:34.898 EAL: request: mp_malloc_sync 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: Heap on socket 0 was expanded by 2MB 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:34.898 EAL: Mem event callback 'spdk:(nil)' registered 00:04:34.898 00:04:34.898 00:04:34.898 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.898 http://cunit.sourceforge.net/ 00:04:34.898 00:04:34.898 00:04:34.898 Suite: components_suite 00:04:34.898 Test: vtophys_malloc_test ...passed 00:04:34.898 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:34.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.898 EAL: Restoring previous memory policy: 4 00:04:34.898 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.898 EAL: request: mp_malloc_sync 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: Heap on socket 0 was expanded by 4MB 00:04:34.898 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.898 EAL: request: mp_malloc_sync 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: Heap on socket 0 was shrunk by 4MB 00:04:34.898 EAL: Trying to obtain current memory policy. 00:04:34.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.898 EAL: Restoring previous memory policy: 4 00:04:34.898 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.898 EAL: request: mp_malloc_sync 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: Heap on socket 0 was expanded by 6MB 00:04:34.898 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.898 EAL: request: mp_malloc_sync 00:04:34.898 EAL: No shared files mode enabled, IPC is disabled 00:04:34.898 EAL: Heap on socket 0 was shrunk by 6MB 00:04:34.898 EAL: Trying to obtain current memory policy. 00:04:34.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.898 EAL: Restoring previous memory policy: 4 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was expanded by 10MB 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was shrunk by 10MB 00:04:34.899 EAL: Trying to obtain current memory policy. 
00:04:34.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.899 EAL: Restoring previous memory policy: 4 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was expanded by 18MB 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was shrunk by 18MB 00:04:34.899 EAL: Trying to obtain current memory policy. 00:04:34.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.899 EAL: Restoring previous memory policy: 4 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was expanded by 34MB 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was shrunk by 34MB 00:04:34.899 EAL: Trying to obtain current memory policy. 00:04:34.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.899 EAL: Restoring previous memory policy: 4 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was expanded by 66MB 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.899 EAL: Trying to obtain current memory policy. 00:04:34.899 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.899 EAL: Restoring previous memory policy: 4 00:04:34.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.899 EAL: request: mp_malloc_sync 00:04:34.899 EAL: No shared files mode enabled, IPC is disabled 00:04:34.899 EAL: Heap on socket 0 was expanded by 130MB 00:04:35.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.160 EAL: request: mp_malloc_sync 00:04:35.160 EAL: No shared files mode enabled, IPC is disabled 00:04:35.160 EAL: Heap on socket 0 was shrunk by 130MB 00:04:35.160 EAL: Trying to obtain current memory policy. 00:04:35.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.160 EAL: Restoring previous memory policy: 4 00:04:35.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.160 EAL: request: mp_malloc_sync 00:04:35.160 EAL: No shared files mode enabled, IPC is disabled 00:04:35.160 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.160 EAL: request: mp_malloc_sync 00:04:35.160 EAL: No shared files mode enabled, IPC is disabled 00:04:35.160 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.160 EAL: Trying to obtain current memory policy. 
00:04:35.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.160 EAL: Restoring previous memory policy: 4 00:04:35.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.160 EAL: request: mp_malloc_sync 00:04:35.160 EAL: No shared files mode enabled, IPC is disabled 00:04:35.160 EAL: Heap on socket 0 was expanded by 514MB 00:04:35.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.420 EAL: request: mp_malloc_sync 00:04:35.420 EAL: No shared files mode enabled, IPC is disabled 00:04:35.420 EAL: Heap on socket 0 was shrunk by 514MB 00:04:35.420 EAL: Trying to obtain current memory policy. 00:04:35.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.420 EAL: Restoring previous memory policy: 4 00:04:35.420 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.420 EAL: request: mp_malloc_sync 00:04:35.420 EAL: No shared files mode enabled, IPC is disabled 00:04:35.420 EAL: Heap on socket 0 was expanded by 1026MB 00:04:35.420 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.681 EAL: request: mp_malloc_sync 00:04:35.681 EAL: No shared files mode enabled, IPC is disabled 00:04:35.681 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.681 passed 00:04:35.681 00:04:35.681 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.681 suites 1 1 n/a 0 0 00:04:35.681 tests 2 2 2 0 0 00:04:35.681 asserts 497 497 497 0 n/a 00:04:35.681 00:04:35.681 Elapsed time = 0.648 seconds 00:04:35.681 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.681 EAL: request: mp_malloc_sync 00:04:35.681 EAL: No shared files mode enabled, IPC is disabled 00:04:35.681 EAL: Heap on socket 0 was shrunk by 2MB 00:04:35.681 EAL: No shared files mode enabled, IPC is disabled 00:04:35.681 EAL: No shared files mode enabled, IPC is disabled 00:04:35.681 EAL: No shared files mode enabled, IPC is disabled 00:04:35.681 00:04:35.681 real 0m0.761s 00:04:35.681 user 0m0.409s 00:04:35.681 sys 0m0.330s 00:04:35.681 12:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.681 12:47:40 -- common/autotest_common.sh@10 -- # set +x 00:04:35.681 ************************************ 00:04:35.681 END TEST env_vtophys 00:04:35.681 ************************************ 00:04:35.681 12:47:40 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:35.681 12:47:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.681 12:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.681 12:47:40 -- common/autotest_common.sh@10 -- # set +x 00:04:35.942 ************************************ 00:04:35.942 START TEST env_pci 00:04:35.942 ************************************ 00:04:35.942 12:47:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:35.942 00:04:35.942 00:04:35.942 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.942 http://cunit.sourceforge.net/ 00:04:35.942 00:04:35.942 00:04:35.942 Suite: pci 00:04:35.942 Test: pci_hook ...[2024-04-26 12:47:40.784491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3753312 has claimed it 00:04:35.942 EAL: Cannot find device (10000:00:01.0) 00:04:35.942 EAL: Failed to attach device on primary process 00:04:35.942 passed 00:04:35.942 00:04:35.942 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.942 suites 1 1 n/a 0 0 00:04:35.942 tests 1 1 1 0 0 
00:04:35.942 asserts 25 25 25 0 n/a 00:04:35.942 00:04:35.942 Elapsed time = 0.038 seconds 00:04:35.942 00:04:35.942 real 0m0.059s 00:04:35.942 user 0m0.018s 00:04:35.942 sys 0m0.040s 00:04:35.942 12:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.942 12:47:40 -- common/autotest_common.sh@10 -- # set +x 00:04:35.942 ************************************ 00:04:35.942 END TEST env_pci 00:04:35.942 ************************************ 00:04:35.942 12:47:40 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:35.942 12:47:40 -- env/env.sh@15 -- # uname 00:04:35.942 12:47:40 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:35.942 12:47:40 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:35.942 12:47:40 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.942 12:47:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:35.942 12:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.942 12:47:40 -- common/autotest_common.sh@10 -- # set +x 00:04:36.202 ************************************ 00:04:36.202 START TEST env_dpdk_post_init 00:04:36.202 ************************************ 00:04:36.202 12:47:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.202 EAL: Detected CPU lcores: 128 00:04:36.202 EAL: Detected NUMA nodes: 2 00:04:36.202 EAL: Detected shared linkage of DPDK 00:04:36.202 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.202 EAL: Selected IOVA mode 'VA' 00:04:36.202 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.202 EAL: VFIO support initialized 00:04:36.202 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.202 EAL: Using IOMMU type 1 (Type 1) 00:04:36.461 EAL: Ignore mapping IO port bar(1) 00:04:36.461 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:36.461 EAL: Ignore mapping IO port bar(1) 00:04:36.721 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:36.721 EAL: Ignore mapping IO port bar(1) 00:04:36.980 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:36.980 EAL: Ignore mapping IO port bar(1) 00:04:37.240 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:37.240 EAL: Ignore mapping IO port bar(1) 00:04:37.240 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:37.500 EAL: Ignore mapping IO port bar(1) 00:04:37.500 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:37.761 EAL: Ignore mapping IO port bar(1) 00:04:37.761 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:38.021 EAL: Ignore mapping IO port bar(1) 00:04:38.021 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:38.281 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:38.281 EAL: Ignore mapping IO port bar(1) 00:04:38.540 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:38.540 EAL: Ignore mapping IO port bar(1) 00:04:38.800 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:38.800 EAL: Ignore mapping IO port bar(1) 00:04:38.800 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:04:39.059 EAL: Ignore mapping IO port bar(1) 00:04:39.059 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:39.318 EAL: Ignore mapping IO port bar(1) 00:04:39.318 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:39.578 EAL: Ignore mapping IO port bar(1) 00:04:39.578 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:39.578 EAL: Ignore mapping IO port bar(1) 00:04:39.837 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:39.837 EAL: Ignore mapping IO port bar(1) 00:04:40.096 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:40.096 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:40.096 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:40.096 Starting DPDK initialization... 00:04:40.096 Starting SPDK post initialization... 00:04:40.096 SPDK NVMe probe 00:04:40.096 Attaching to 0000:65:00.0 00:04:40.096 Attached to 0000:65:00.0 00:04:40.096 Cleaning up... 00:04:42.009 00:04:42.009 real 0m5.717s 00:04:42.009 user 0m0.181s 00:04:42.009 sys 0m0.077s 00:04:42.009 12:47:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.009 12:47:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 ************************************ 00:04:42.009 END TEST env_dpdk_post_init 00:04:42.009 ************************************ 00:04:42.009 12:47:46 -- env/env.sh@26 -- # uname 00:04:42.009 12:47:46 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.009 12:47:46 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.009 12:47:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.009 12:47:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.009 12:47:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 ************************************ 00:04:42.009 START TEST env_mem_callbacks 00:04:42.009 ************************************ 00:04:42.009 12:47:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.009 EAL: Detected CPU lcores: 128 00:04:42.009 EAL: Detected NUMA nodes: 2 00:04:42.009 EAL: Detected shared linkage of DPDK 00:04:42.009 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.009 EAL: Selected IOVA mode 'VA' 00:04:42.009 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.009 EAL: VFIO support initialized 00:04:42.009 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.009 00:04:42.009 00:04:42.009 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.009 http://cunit.sourceforge.net/ 00:04:42.009 00:04:42.009 00:04:42.009 Suite: memory 00:04:42.009 Test: test ... 
00:04:42.009 register 0x200000200000 2097152 00:04:42.009 malloc 3145728 00:04:42.009 register 0x200000400000 4194304 00:04:42.009 buf 0x200000500000 len 3145728 PASSED 00:04:42.009 malloc 64 00:04:42.009 buf 0x2000004fff40 len 64 PASSED 00:04:42.009 malloc 4194304 00:04:42.009 register 0x200000800000 6291456 00:04:42.009 buf 0x200000a00000 len 4194304 PASSED 00:04:42.009 free 0x200000500000 3145728 00:04:42.009 free 0x2000004fff40 64 00:04:42.009 unregister 0x200000400000 4194304 PASSED 00:04:42.009 free 0x200000a00000 4194304 00:04:42.009 unregister 0x200000800000 6291456 PASSED 00:04:42.009 malloc 8388608 00:04:42.009 register 0x200000400000 10485760 00:04:42.009 buf 0x200000600000 len 8388608 PASSED 00:04:42.009 free 0x200000600000 8388608 00:04:42.009 unregister 0x200000400000 10485760 PASSED 00:04:42.009 passed 00:04:42.009 00:04:42.009 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.009 suites 1 1 n/a 0 0 00:04:42.009 tests 1 1 1 0 0 00:04:42.009 asserts 15 15 15 0 n/a 00:04:42.009 00:04:42.009 Elapsed time = 0.005 seconds 00:04:42.009 00:04:42.009 real 0m0.058s 00:04:42.009 user 0m0.023s 00:04:42.009 sys 0m0.035s 00:04:42.009 12:47:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.009 12:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 ************************************ 00:04:42.009 END TEST env_mem_callbacks 00:04:42.009 ************************************ 00:04:42.009 00:04:42.009 real 0m7.877s 00:04:42.009 user 0m1.233s 00:04:42.009 sys 0m1.085s 00:04:42.009 12:47:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.009 12:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 ************************************ 00:04:42.009 END TEST env 00:04:42.009 ************************************ 00:04:42.269 12:47:47 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.269 12:47:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.269 12:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.269 12:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:42.269 ************************************ 00:04:42.269 START TEST rpc 00:04:42.269 ************************************ 00:04:42.269 12:47:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.269 * Looking for test storage... 00:04:42.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.530 12:47:47 -- rpc/rpc.sh@65 -- # spdk_pid=3754758 00:04:42.530 12:47:47 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.530 12:47:47 -- rpc/rpc.sh@67 -- # waitforlisten 3754758 00:04:42.530 12:47:47 -- common/autotest_common.sh@817 -- # '[' -z 3754758 ']' 00:04:42.530 12:47:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.530 12:47:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.530 12:47:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
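The rpc suite starting here brings up a standalone spdk_tgt (launched with -e bdev in the trace that follows) and waits for its RPC socket before issuing any rpc_cmd calls. A rough, hand-run sketch of that launch-and-wait pattern; this is not the harness's actual waitforlisten helper, and the paths and polling interval are illustrative:

    # Launch the target with the bdev tracepoint group enabled, as rpc.sh does below.
    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!

    # Poll the default RPC socket until the target answers a trivial RPC,
    # bailing out if the process dies before it ever listens.
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        kill -0 "$spdk_pid" || exit 1   # target exited before listening
        sleep 0.5
    done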
00:04:42.530 12:47:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.530 12:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:42.530 12:47:47 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.530 [2024-04-26 12:47:47.384820] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:04:42.530 [2024-04-26 12:47:47.384875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754758 ] 00:04:42.530 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.530 [2024-04-26 12:47:47.445772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.530 [2024-04-26 12:47:47.510572] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.530 [2024-04-26 12:47:47.510608] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3754758' to capture a snapshot of events at runtime. 00:04:42.530 [2024-04-26 12:47:47.510616] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.530 [2024-04-26 12:47:47.510622] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.530 [2024-04-26 12:47:47.510628] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3754758 for offline analysis/debug. 00:04:42.530 [2024-04-26 12:47:47.510646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.100 12:47:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:43.100 12:47:48 -- common/autotest_common.sh@850 -- # return 0 00:04:43.100 12:47:48 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.100 12:47:48 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.100 12:47:48 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.100 12:47:48 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.100 12:47:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.100 12:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.100 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.361 ************************************ 00:04:43.361 START TEST rpc_integrity 00:04:43.361 ************************************ 00:04:43.361 12:47:48 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:43.361 12:47:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.361 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.361 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.361 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.361 12:47:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.361 12:47:48 -- rpc/rpc.sh@13 -- # jq length 00:04:43.361 12:47:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.361 12:47:48 -- 
rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.361 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.361 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.361 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.361 12:47:48 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.361 12:47:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.361 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.361 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.361 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.361 12:47:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.361 { 00:04:43.361 "name": "Malloc0", 00:04:43.361 "aliases": [ 00:04:43.361 "d02e287f-8c9d-4881-8ef8-8a12a21630fc" 00:04:43.361 ], 00:04:43.361 "product_name": "Malloc disk", 00:04:43.361 "block_size": 512, 00:04:43.361 "num_blocks": 16384, 00:04:43.361 "uuid": "d02e287f-8c9d-4881-8ef8-8a12a21630fc", 00:04:43.361 "assigned_rate_limits": { 00:04:43.361 "rw_ios_per_sec": 0, 00:04:43.361 "rw_mbytes_per_sec": 0, 00:04:43.361 "r_mbytes_per_sec": 0, 00:04:43.361 "w_mbytes_per_sec": 0 00:04:43.361 }, 00:04:43.361 "claimed": false, 00:04:43.361 "zoned": false, 00:04:43.361 "supported_io_types": { 00:04:43.361 "read": true, 00:04:43.361 "write": true, 00:04:43.361 "unmap": true, 00:04:43.361 "write_zeroes": true, 00:04:43.361 "flush": true, 00:04:43.361 "reset": true, 00:04:43.361 "compare": false, 00:04:43.361 "compare_and_write": false, 00:04:43.361 "abort": true, 00:04:43.361 "nvme_admin": false, 00:04:43.361 "nvme_io": false 00:04:43.361 }, 00:04:43.361 "memory_domains": [ 00:04:43.361 { 00:04:43.361 "dma_device_id": "system", 00:04:43.361 "dma_device_type": 1 00:04:43.361 }, 00:04:43.361 { 00:04:43.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.361 "dma_device_type": 2 00:04:43.361 } 00:04:43.361 ], 00:04:43.361 "driver_specific": {} 00:04:43.361 } 00:04:43.361 ]' 00:04:43.361 12:47:48 -- rpc/rpc.sh@17 -- # jq length 00:04:43.361 12:47:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.361 12:47:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.361 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.361 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.622 [2024-04-26 12:47:48.422114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.622 [2024-04-26 12:47:48.422146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.622 [2024-04-26 12:47:48.422158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e69610 00:04:43.622 [2024-04-26 12:47:48.422165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.622 [2024-04-26 12:47:48.423483] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.622 [2024-04-26 12:47:48.423504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.622 Passthru0 00:04:43.622 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.622 12:47:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.622 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.622 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.622 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.622 12:47:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.622 { 00:04:43.622 "name": "Malloc0", 00:04:43.622 "aliases": [ 00:04:43.622 
"d02e287f-8c9d-4881-8ef8-8a12a21630fc" 00:04:43.622 ], 00:04:43.622 "product_name": "Malloc disk", 00:04:43.622 "block_size": 512, 00:04:43.622 "num_blocks": 16384, 00:04:43.622 "uuid": "d02e287f-8c9d-4881-8ef8-8a12a21630fc", 00:04:43.622 "assigned_rate_limits": { 00:04:43.622 "rw_ios_per_sec": 0, 00:04:43.622 "rw_mbytes_per_sec": 0, 00:04:43.622 "r_mbytes_per_sec": 0, 00:04:43.622 "w_mbytes_per_sec": 0 00:04:43.622 }, 00:04:43.622 "claimed": true, 00:04:43.622 "claim_type": "exclusive_write", 00:04:43.622 "zoned": false, 00:04:43.622 "supported_io_types": { 00:04:43.622 "read": true, 00:04:43.622 "write": true, 00:04:43.622 "unmap": true, 00:04:43.622 "write_zeroes": true, 00:04:43.622 "flush": true, 00:04:43.622 "reset": true, 00:04:43.622 "compare": false, 00:04:43.622 "compare_and_write": false, 00:04:43.622 "abort": true, 00:04:43.622 "nvme_admin": false, 00:04:43.622 "nvme_io": false 00:04:43.622 }, 00:04:43.622 "memory_domains": [ 00:04:43.622 { 00:04:43.622 "dma_device_id": "system", 00:04:43.622 "dma_device_type": 1 00:04:43.622 }, 00:04:43.622 { 00:04:43.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.622 "dma_device_type": 2 00:04:43.622 } 00:04:43.622 ], 00:04:43.622 "driver_specific": {} 00:04:43.622 }, 00:04:43.622 { 00:04:43.622 "name": "Passthru0", 00:04:43.622 "aliases": [ 00:04:43.622 "0417081c-ce93-577e-9a0b-e8fe046b0c1a" 00:04:43.622 ], 00:04:43.622 "product_name": "passthru", 00:04:43.622 "block_size": 512, 00:04:43.622 "num_blocks": 16384, 00:04:43.622 "uuid": "0417081c-ce93-577e-9a0b-e8fe046b0c1a", 00:04:43.622 "assigned_rate_limits": { 00:04:43.622 "rw_ios_per_sec": 0, 00:04:43.622 "rw_mbytes_per_sec": 0, 00:04:43.622 "r_mbytes_per_sec": 0, 00:04:43.622 "w_mbytes_per_sec": 0 00:04:43.622 }, 00:04:43.622 "claimed": false, 00:04:43.622 "zoned": false, 00:04:43.622 "supported_io_types": { 00:04:43.622 "read": true, 00:04:43.622 "write": true, 00:04:43.622 "unmap": true, 00:04:43.622 "write_zeroes": true, 00:04:43.622 "flush": true, 00:04:43.622 "reset": true, 00:04:43.622 "compare": false, 00:04:43.622 "compare_and_write": false, 00:04:43.622 "abort": true, 00:04:43.622 "nvme_admin": false, 00:04:43.622 "nvme_io": false 00:04:43.622 }, 00:04:43.622 "memory_domains": [ 00:04:43.622 { 00:04:43.622 "dma_device_id": "system", 00:04:43.622 "dma_device_type": 1 00:04:43.622 }, 00:04:43.622 { 00:04:43.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.622 "dma_device_type": 2 00:04:43.622 } 00:04:43.622 ], 00:04:43.622 "driver_specific": { 00:04:43.622 "passthru": { 00:04:43.622 "name": "Passthru0", 00:04:43.622 "base_bdev_name": "Malloc0" 00:04:43.622 } 00:04:43.622 } 00:04:43.622 } 00:04:43.622 ]' 00:04:43.622 12:47:48 -- rpc/rpc.sh@21 -- # jq length 00:04:43.622 12:47:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.622 12:47:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.622 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.622 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.622 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.622 12:47:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.622 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.622 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.622 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.622 12:47:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.622 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.622 12:47:48 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.622 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.622 12:47:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.622 12:47:48 -- rpc/rpc.sh@26 -- # jq length 00:04:43.622 12:47:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.622 00:04:43.622 real 0m0.292s 00:04:43.622 user 0m0.189s 00:04:43.622 sys 0m0.032s 00:04:43.622 12:47:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.622 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.622 ************************************ 00:04:43.622 END TEST rpc_integrity 00:04:43.622 ************************************ 00:04:43.622 12:47:48 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.622 12:47:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.622 12:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.622 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.883 ************************************ 00:04:43.883 START TEST rpc_plugins 00:04:43.883 ************************************ 00:04:43.883 12:47:48 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:43.883 12:47:48 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.883 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.883 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.883 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.883 12:47:48 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.883 12:47:48 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.883 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.883 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.883 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.883 12:47:48 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.883 { 00:04:43.883 "name": "Malloc1", 00:04:43.883 "aliases": [ 00:04:43.883 "c9a002dd-1aab-4e75-8470-cabf7bfa53fa" 00:04:43.883 ], 00:04:43.883 "product_name": "Malloc disk", 00:04:43.883 "block_size": 4096, 00:04:43.883 "num_blocks": 256, 00:04:43.883 "uuid": "c9a002dd-1aab-4e75-8470-cabf7bfa53fa", 00:04:43.883 "assigned_rate_limits": { 00:04:43.883 "rw_ios_per_sec": 0, 00:04:43.883 "rw_mbytes_per_sec": 0, 00:04:43.883 "r_mbytes_per_sec": 0, 00:04:43.883 "w_mbytes_per_sec": 0 00:04:43.883 }, 00:04:43.883 "claimed": false, 00:04:43.883 "zoned": false, 00:04:43.883 "supported_io_types": { 00:04:43.883 "read": true, 00:04:43.883 "write": true, 00:04:43.883 "unmap": true, 00:04:43.883 "write_zeroes": true, 00:04:43.883 "flush": true, 00:04:43.883 "reset": true, 00:04:43.883 "compare": false, 00:04:43.883 "compare_and_write": false, 00:04:43.883 "abort": true, 00:04:43.883 "nvme_admin": false, 00:04:43.883 "nvme_io": false 00:04:43.883 }, 00:04:43.883 "memory_domains": [ 00:04:43.883 { 00:04:43.883 "dma_device_id": "system", 00:04:43.883 "dma_device_type": 1 00:04:43.883 }, 00:04:43.883 { 00:04:43.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.883 "dma_device_type": 2 00:04:43.883 } 00:04:43.883 ], 00:04:43.883 "driver_specific": {} 00:04:43.883 } 00:04:43.883 ]' 00:04:43.883 12:47:48 -- rpc/rpc.sh@32 -- # jq length 00:04:43.883 12:47:48 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.883 12:47:48 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.883 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.883 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.883 12:47:48 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:04:43.883 12:47:48 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.883 12:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:43.883 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.883 12:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:43.883 12:47:48 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.883 12:47:48 -- rpc/rpc.sh@36 -- # jq length 00:04:43.883 12:47:48 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.883 00:04:43.883 real 0m0.140s 00:04:43.883 user 0m0.096s 00:04:43.883 sys 0m0.014s 00:04:43.883 12:47:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.883 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:43.883 ************************************ 00:04:43.883 END TEST rpc_plugins 00:04:43.883 ************************************ 00:04:43.883 12:47:48 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.883 12:47:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.883 12:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.883 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.142 ************************************ 00:04:44.142 START TEST rpc_trace_cmd_test 00:04:44.142 ************************************ 00:04:44.142 12:47:49 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:44.142 12:47:49 -- rpc/rpc.sh@40 -- # local info 00:04:44.142 12:47:49 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.142 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.142 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.142 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.142 12:47:49 -- rpc/rpc.sh@42 -- # info='{ 00:04:44.142 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3754758", 00:04:44.142 "tpoint_group_mask": "0x8", 00:04:44.142 "iscsi_conn": { 00:04:44.142 "mask": "0x2", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "scsi": { 00:04:44.142 "mask": "0x4", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "bdev": { 00:04:44.142 "mask": "0x8", 00:04:44.142 "tpoint_mask": "0xffffffffffffffff" 00:04:44.142 }, 00:04:44.142 "nvmf_rdma": { 00:04:44.142 "mask": "0x10", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "nvmf_tcp": { 00:04:44.142 "mask": "0x20", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "ftl": { 00:04:44.142 "mask": "0x40", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "blobfs": { 00:04:44.142 "mask": "0x80", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "dsa": { 00:04:44.142 "mask": "0x200", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "thread": { 00:04:44.142 "mask": "0x400", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "nvme_pcie": { 00:04:44.142 "mask": "0x800", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "iaa": { 00:04:44.142 "mask": "0x1000", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "nvme_tcp": { 00:04:44.142 "mask": "0x2000", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "bdev_nvme": { 00:04:44.142 "mask": "0x4000", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 }, 00:04:44.142 "sock": { 00:04:44.142 "mask": "0x8000", 00:04:44.142 "tpoint_mask": "0x0" 00:04:44.142 } 00:04:44.142 }' 00:04:44.142 12:47:49 -- rpc/rpc.sh@43 -- # jq length 00:04:44.142 12:47:49 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:44.142 12:47:49 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 
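The trace_get_info output above reflects the -e bdev flag the target was started with: tpoint_group_mask 0x8 is the bdev group, and its per-tpoint mask is fully enabled while every other group stays at 0x0. A minimal sketch of inspecting that state by hand with the standard rpc.py client (the jq filters mirror the checks the test runs; the pid is the one this run reported):

    # Which tracepoint groups are enabled, and where the trace shared-memory file lives.
    ./scripts/rpc.py trace_get_info | jq '{tpoint_group_mask, tpoint_shm_path}'

    # The bdev group's per-tpoint mask should be non-zero when -e bdev was used.
    ./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'

    # Snapshot events at runtime, as suggested in the target's startup notice.
    spdk_trace -s spdk_tgt -p 3754758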
00:04:44.142 12:47:49 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.142 12:47:49 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.402 12:47:49 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.402 12:47:49 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.402 12:47:49 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.402 12:47:49 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.402 12:47:49 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.402 00:04:44.402 real 0m0.223s 00:04:44.402 user 0m0.189s 00:04:44.402 sys 0m0.025s 00:04:44.402 12:47:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.402 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.402 ************************************ 00:04:44.402 END TEST rpc_trace_cmd_test 00:04:44.402 ************************************ 00:04:44.402 12:47:49 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.402 12:47:49 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.402 12:47:49 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.402 12:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.402 12:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.402 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.662 ************************************ 00:04:44.662 START TEST rpc_daemon_integrity 00:04:44.662 ************************************ 00:04:44.662 12:47:49 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:44.662 12:47:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.662 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.662 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.663 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.663 12:47:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.663 12:47:49 -- rpc/rpc.sh@13 -- # jq length 00:04:44.663 12:47:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.663 12:47:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.663 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.663 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.663 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.663 12:47:49 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.663 12:47:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.663 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.663 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.663 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.663 12:47:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.663 { 00:04:44.663 "name": "Malloc2", 00:04:44.663 "aliases": [ 00:04:44.663 "229516b2-5e74-4ca6-a2c9-3439d69140ba" 00:04:44.663 ], 00:04:44.663 "product_name": "Malloc disk", 00:04:44.663 "block_size": 512, 00:04:44.663 "num_blocks": 16384, 00:04:44.663 "uuid": "229516b2-5e74-4ca6-a2c9-3439d69140ba", 00:04:44.663 "assigned_rate_limits": { 00:04:44.663 "rw_ios_per_sec": 0, 00:04:44.663 "rw_mbytes_per_sec": 0, 00:04:44.663 "r_mbytes_per_sec": 0, 00:04:44.663 "w_mbytes_per_sec": 0 00:04:44.663 }, 00:04:44.663 "claimed": false, 00:04:44.663 "zoned": false, 00:04:44.663 "supported_io_types": { 00:04:44.663 "read": true, 00:04:44.663 "write": true, 00:04:44.663 "unmap": true, 00:04:44.663 "write_zeroes": true, 00:04:44.663 "flush": true, 00:04:44.663 "reset": true, 00:04:44.663 "compare": false, 00:04:44.663 "compare_and_write": false, 00:04:44.663 "abort": true, 00:04:44.663 "nvme_admin": false, 00:04:44.663 
"nvme_io": false 00:04:44.663 }, 00:04:44.663 "memory_domains": [ 00:04:44.663 { 00:04:44.663 "dma_device_id": "system", 00:04:44.663 "dma_device_type": 1 00:04:44.663 }, 00:04:44.663 { 00:04:44.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.663 "dma_device_type": 2 00:04:44.663 } 00:04:44.663 ], 00:04:44.663 "driver_specific": {} 00:04:44.663 } 00:04:44.663 ]' 00:04:44.663 12:47:49 -- rpc/rpc.sh@17 -- # jq length 00:04:44.663 12:47:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.663 12:47:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.663 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.663 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.663 [2024-04-26 12:47:49.633379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.663 [2024-04-26 12:47:49.633405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.663 [2024-04-26 12:47:49.633418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e6d460 00:04:44.663 [2024-04-26 12:47:49.633425] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.663 [2024-04-26 12:47:49.634630] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.663 [2024-04-26 12:47:49.634650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.663 Passthru0 00:04:44.663 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.663 12:47:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.663 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.663 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.663 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.663 12:47:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.663 { 00:04:44.663 "name": "Malloc2", 00:04:44.663 "aliases": [ 00:04:44.663 "229516b2-5e74-4ca6-a2c9-3439d69140ba" 00:04:44.663 ], 00:04:44.663 "product_name": "Malloc disk", 00:04:44.663 "block_size": 512, 00:04:44.663 "num_blocks": 16384, 00:04:44.663 "uuid": "229516b2-5e74-4ca6-a2c9-3439d69140ba", 00:04:44.663 "assigned_rate_limits": { 00:04:44.663 "rw_ios_per_sec": 0, 00:04:44.663 "rw_mbytes_per_sec": 0, 00:04:44.663 "r_mbytes_per_sec": 0, 00:04:44.663 "w_mbytes_per_sec": 0 00:04:44.663 }, 00:04:44.663 "claimed": true, 00:04:44.663 "claim_type": "exclusive_write", 00:04:44.663 "zoned": false, 00:04:44.663 "supported_io_types": { 00:04:44.663 "read": true, 00:04:44.663 "write": true, 00:04:44.663 "unmap": true, 00:04:44.663 "write_zeroes": true, 00:04:44.663 "flush": true, 00:04:44.663 "reset": true, 00:04:44.663 "compare": false, 00:04:44.663 "compare_and_write": false, 00:04:44.663 "abort": true, 00:04:44.663 "nvme_admin": false, 00:04:44.663 "nvme_io": false 00:04:44.663 }, 00:04:44.663 "memory_domains": [ 00:04:44.663 { 00:04:44.663 "dma_device_id": "system", 00:04:44.663 "dma_device_type": 1 00:04:44.663 }, 00:04:44.663 { 00:04:44.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.663 "dma_device_type": 2 00:04:44.663 } 00:04:44.663 ], 00:04:44.663 "driver_specific": {} 00:04:44.663 }, 00:04:44.663 { 00:04:44.663 "name": "Passthru0", 00:04:44.663 "aliases": [ 00:04:44.663 "7eacf1a0-1562-590b-af76-1bfbc7c0154d" 00:04:44.663 ], 00:04:44.663 "product_name": "passthru", 00:04:44.663 "block_size": 512, 00:04:44.663 "num_blocks": 16384, 00:04:44.663 "uuid": "7eacf1a0-1562-590b-af76-1bfbc7c0154d", 00:04:44.663 "assigned_rate_limits": 
{ 00:04:44.663 "rw_ios_per_sec": 0, 00:04:44.663 "rw_mbytes_per_sec": 0, 00:04:44.663 "r_mbytes_per_sec": 0, 00:04:44.663 "w_mbytes_per_sec": 0 00:04:44.663 }, 00:04:44.663 "claimed": false, 00:04:44.663 "zoned": false, 00:04:44.663 "supported_io_types": { 00:04:44.663 "read": true, 00:04:44.663 "write": true, 00:04:44.663 "unmap": true, 00:04:44.663 "write_zeroes": true, 00:04:44.663 "flush": true, 00:04:44.663 "reset": true, 00:04:44.663 "compare": false, 00:04:44.663 "compare_and_write": false, 00:04:44.663 "abort": true, 00:04:44.663 "nvme_admin": false, 00:04:44.663 "nvme_io": false 00:04:44.663 }, 00:04:44.663 "memory_domains": [ 00:04:44.663 { 00:04:44.663 "dma_device_id": "system", 00:04:44.663 "dma_device_type": 1 00:04:44.663 }, 00:04:44.663 { 00:04:44.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.663 "dma_device_type": 2 00:04:44.663 } 00:04:44.663 ], 00:04:44.663 "driver_specific": { 00:04:44.663 "passthru": { 00:04:44.663 "name": "Passthru0", 00:04:44.663 "base_bdev_name": "Malloc2" 00:04:44.663 } 00:04:44.663 } 00:04:44.663 } 00:04:44.663 ]' 00:04:44.663 12:47:49 -- rpc/rpc.sh@21 -- # jq length 00:04:44.663 12:47:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.663 12:47:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.663 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.663 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.663 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.663 12:47:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.663 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.663 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.923 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.923 12:47:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.923 12:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.923 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.923 12:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.923 12:47:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.923 12:47:49 -- rpc/rpc.sh@26 -- # jq length 00:04:44.923 12:47:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.923 00:04:44.923 real 0m0.291s 00:04:44.923 user 0m0.185s 00:04:44.923 sys 0m0.040s 00:04:44.923 12:47:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.923 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.923 ************************************ 00:04:44.923 END TEST rpc_daemon_integrity 00:04:44.923 ************************************ 00:04:44.923 12:47:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.923 12:47:49 -- rpc/rpc.sh@84 -- # killprocess 3754758 00:04:44.924 12:47:49 -- common/autotest_common.sh@936 -- # '[' -z 3754758 ']' 00:04:44.924 12:47:49 -- common/autotest_common.sh@940 -- # kill -0 3754758 00:04:44.924 12:47:49 -- common/autotest_common.sh@941 -- # uname 00:04:44.924 12:47:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.924 12:47:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3754758 00:04:44.924 12:47:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.924 12:47:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.924 12:47:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3754758' 00:04:44.924 killing process with pid 3754758 00:04:44.924 12:47:49 -- common/autotest_common.sh@955 -- # kill 3754758 00:04:44.924 12:47:49 -- 
common/autotest_common.sh@960 -- # wait 3754758 00:04:45.183 00:04:45.183 real 0m2.853s 00:04:45.183 user 0m3.839s 00:04:45.183 sys 0m0.793s 00:04:45.183 12:47:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.183 12:47:50 -- common/autotest_common.sh@10 -- # set +x 00:04:45.183 ************************************ 00:04:45.183 END TEST rpc 00:04:45.183 ************************************ 00:04:45.183 12:47:50 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.184 12:47:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.184 12:47:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.184 12:47:50 -- common/autotest_common.sh@10 -- # set +x 00:04:45.444 ************************************ 00:04:45.444 START TEST skip_rpc 00:04:45.444 ************************************ 00:04:45.444 12:47:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.444 * Looking for test storage... 00:04:45.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:45.444 12:47:50 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.444 12:47:50 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:45.444 12:47:50 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.444 12:47:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.444 12:47:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.444 12:47:50 -- common/autotest_common.sh@10 -- # set +x 00:04:45.705 ************************************ 00:04:45.705 START TEST skip_rpc 00:04:45.705 ************************************ 00:04:45.705 12:47:50 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:45.705 12:47:50 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3755650 00:04:45.705 12:47:50 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.705 12:47:50 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.705 12:47:50 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.705 [2024-04-26 12:47:50.576399] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
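The skip_rpc run that follows launches the target with --no-rpc-server and then asserts that an ordinary RPC fails, which is the whole point of the test. A hand-run sketch of the same check, with the harness's NOT() wrapper replaced by a plain exit-status test (paths are the workspace defaults and are illustrative):

    # Start a target that never opens /var/tmp/spdk.sock.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &

    # Any RPC against the default socket is now expected to fail;
    # the test treats that non-zero exit as success.
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server is up" >&2
    else
        echo "RPC refused as expected"
    fi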
00:04:45.705 [2024-04-26 12:47:50.576453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755650 ] 00:04:45.705 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.705 [2024-04-26 12:47:50.643267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.705 [2024-04-26 12:47:50.717601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.996 12:47:55 -- common/autotest_common.sh@638 -- # local es=0 00:04:50.996 12:47:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.996 12:47:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:50.996 12:47:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:50.996 12:47:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:50.996 12:47:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:50.996 12:47:55 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:50.996 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.996 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:50.996 12:47:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:50.996 12:47:55 -- common/autotest_common.sh@641 -- # es=1 00:04:50.996 12:47:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:50.996 12:47:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:50.996 12:47:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@23 -- # killprocess 3755650 00:04:50.996 12:47:55 -- common/autotest_common.sh@936 -- # '[' -z 3755650 ']' 00:04:50.996 12:47:55 -- common/autotest_common.sh@940 -- # kill -0 3755650 00:04:50.996 12:47:55 -- common/autotest_common.sh@941 -- # uname 00:04:50.996 12:47:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:50.996 12:47:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3755650 00:04:50.996 12:47:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:50.996 12:47:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:50.996 12:47:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3755650' 00:04:50.996 killing process with pid 3755650 00:04:50.996 12:47:55 -- common/autotest_common.sh@955 -- # kill 3755650 00:04:50.996 12:47:55 -- common/autotest_common.sh@960 -- # wait 3755650 00:04:50.996 00:04:50.996 real 0m5.277s 00:04:50.996 user 0m5.078s 00:04:50.996 sys 0m0.235s 00:04:50.996 12:47:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.996 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:50.996 ************************************ 00:04:50.996 END TEST skip_rpc 00:04:50.996 ************************************ 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.996 12:47:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.996 12:47:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.996 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:50.996 ************************************ 00:04:50.996 START TEST skip_rpc_with_json 00:04:50.996 ************************************ 
00:04:50.996 12:47:55 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3756731 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3756731 00:04:50.996 12:47:55 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.996 12:47:55 -- common/autotest_common.sh@817 -- # '[' -z 3756731 ']' 00:04:50.996 12:47:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.996 12:47:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:50.996 12:47:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.996 12:47:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:50.996 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:50.996 [2024-04-26 12:47:56.051031] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:04:50.996 [2024-04-26 12:47:56.051088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756731 ] 00:04:51.256 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.257 [2024-04-26 12:47:56.115260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.257 [2024-04-26 12:47:56.181550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.827 12:47:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:51.827 12:47:56 -- common/autotest_common.sh@850 -- # return 0 00:04:51.827 12:47:56 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.827 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.827 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:04:51.827 [2024-04-26 12:47:56.828780] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.827 request: 00:04:51.827 { 00:04:51.827 "trtype": "tcp", 00:04:51.827 "method": "nvmf_get_transports", 00:04:51.827 "req_id": 1 00:04:51.827 } 00:04:51.827 Got JSON-RPC error response 00:04:51.827 response: 00:04:51.827 { 00:04:51.827 "code": -19, 00:04:51.827 "message": "No such device" 00:04:51.827 } 00:04:51.827 12:47:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:51.827 12:47:56 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.827 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.827 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:04:51.827 [2024-04-26 12:47:56.840898] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.827 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.827 12:47:56 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.827 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.827 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:04:52.089 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.089 12:47:56 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.089 { 
00:04:52.089 "subsystems": [ 00:04:52.089 { 00:04:52.089 "subsystem": "keyring", 00:04:52.089 "config": [] 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "subsystem": "iobuf", 00:04:52.089 "config": [ 00:04:52.089 { 00:04:52.089 "method": "iobuf_set_options", 00:04:52.089 "params": { 00:04:52.089 "small_pool_count": 8192, 00:04:52.089 "large_pool_count": 1024, 00:04:52.089 "small_bufsize": 8192, 00:04:52.089 "large_bufsize": 135168 00:04:52.089 } 00:04:52.089 } 00:04:52.089 ] 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "subsystem": "sock", 00:04:52.089 "config": [ 00:04:52.089 { 00:04:52.089 "method": "sock_impl_set_options", 00:04:52.089 "params": { 00:04:52.089 "impl_name": "posix", 00:04:52.089 "recv_buf_size": 2097152, 00:04:52.089 "send_buf_size": 2097152, 00:04:52.089 "enable_recv_pipe": true, 00:04:52.089 "enable_quickack": false, 00:04:52.089 "enable_placement_id": 0, 00:04:52.089 "enable_zerocopy_send_server": true, 00:04:52.089 "enable_zerocopy_send_client": false, 00:04:52.089 "zerocopy_threshold": 0, 00:04:52.089 "tls_version": 0, 00:04:52.089 "enable_ktls": false 00:04:52.089 } 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "method": "sock_impl_set_options", 00:04:52.089 "params": { 00:04:52.089 "impl_name": "ssl", 00:04:52.089 "recv_buf_size": 4096, 00:04:52.089 "send_buf_size": 4096, 00:04:52.089 "enable_recv_pipe": true, 00:04:52.089 "enable_quickack": false, 00:04:52.089 "enable_placement_id": 0, 00:04:52.089 "enable_zerocopy_send_server": true, 00:04:52.089 "enable_zerocopy_send_client": false, 00:04:52.089 "zerocopy_threshold": 0, 00:04:52.089 "tls_version": 0, 00:04:52.089 "enable_ktls": false 00:04:52.089 } 00:04:52.089 } 00:04:52.089 ] 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "subsystem": "vmd", 00:04:52.089 "config": [] 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "subsystem": "accel", 00:04:52.089 "config": [ 00:04:52.089 { 00:04:52.089 "method": "accel_set_options", 00:04:52.089 "params": { 00:04:52.089 "small_cache_size": 128, 00:04:52.089 "large_cache_size": 16, 00:04:52.089 "task_count": 2048, 00:04:52.089 "sequence_count": 2048, 00:04:52.089 "buf_count": 2048 00:04:52.089 } 00:04:52.089 } 00:04:52.089 ] 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "subsystem": "bdev", 00:04:52.089 "config": [ 00:04:52.089 { 00:04:52.089 "method": "bdev_set_options", 00:04:52.089 "params": { 00:04:52.089 "bdev_io_pool_size": 65535, 00:04:52.089 "bdev_io_cache_size": 256, 00:04:52.089 "bdev_auto_examine": true, 00:04:52.089 "iobuf_small_cache_size": 128, 00:04:52.089 "iobuf_large_cache_size": 16 00:04:52.089 } 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "method": "bdev_raid_set_options", 00:04:52.089 "params": { 00:04:52.089 "process_window_size_kb": 1024 00:04:52.089 } 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "method": "bdev_iscsi_set_options", 00:04:52.089 "params": { 00:04:52.089 "timeout_sec": 30 00:04:52.089 } 00:04:52.089 }, 00:04:52.089 { 00:04:52.089 "method": "bdev_nvme_set_options", 00:04:52.089 "params": { 00:04:52.089 "action_on_timeout": "none", 00:04:52.089 "timeout_us": 0, 00:04:52.089 "timeout_admin_us": 0, 00:04:52.089 "keep_alive_timeout_ms": 10000, 00:04:52.089 "arbitration_burst": 0, 00:04:52.089 "low_priority_weight": 0, 00:04:52.089 "medium_priority_weight": 0, 00:04:52.089 "high_priority_weight": 0, 00:04:52.089 "nvme_adminq_poll_period_us": 10000, 00:04:52.089 "nvme_ioq_poll_period_us": 0, 00:04:52.089 "io_queue_requests": 0, 00:04:52.089 "delay_cmd_submit": true, 00:04:52.089 "transport_retry_count": 4, 00:04:52.089 "bdev_retry_count": 3, 00:04:52.089 
"transport_ack_timeout": 0, 00:04:52.089 "ctrlr_loss_timeout_sec": 0, 00:04:52.089 "reconnect_delay_sec": 0, 00:04:52.089 "fast_io_fail_timeout_sec": 0, 00:04:52.089 "disable_auto_failback": false, 00:04:52.089 "generate_uuids": false, 00:04:52.089 "transport_tos": 0, 00:04:52.089 "nvme_error_stat": false, 00:04:52.089 "rdma_srq_size": 0, 00:04:52.089 "io_path_stat": false, 00:04:52.089 "allow_accel_sequence": false, 00:04:52.089 "rdma_max_cq_size": 0, 00:04:52.089 "rdma_cm_event_timeout_ms": 0, 00:04:52.089 "dhchap_digests": [ 00:04:52.089 "sha256", 00:04:52.089 "sha384", 00:04:52.089 "sha512" 00:04:52.089 ], 00:04:52.090 "dhchap_dhgroups": [ 00:04:52.090 "null", 00:04:52.090 "ffdhe2048", 00:04:52.090 "ffdhe3072", 00:04:52.090 "ffdhe4096", 00:04:52.090 "ffdhe6144", 00:04:52.090 "ffdhe8192" 00:04:52.090 ] 00:04:52.090 } 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "method": "bdev_nvme_set_hotplug", 00:04:52.090 "params": { 00:04:52.090 "period_us": 100000, 00:04:52.090 "enable": false 00:04:52.090 } 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "method": "bdev_wait_for_examine" 00:04:52.090 } 00:04:52.090 ] 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "scsi", 00:04:52.090 "config": null 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "scheduler", 00:04:52.090 "config": [ 00:04:52.090 { 00:04:52.090 "method": "framework_set_scheduler", 00:04:52.090 "params": { 00:04:52.090 "name": "static" 00:04:52.090 } 00:04:52.090 } 00:04:52.090 ] 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "vhost_scsi", 00:04:52.090 "config": [] 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "vhost_blk", 00:04:52.090 "config": [] 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "ublk", 00:04:52.090 "config": [] 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "nbd", 00:04:52.090 "config": [] 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "nvmf", 00:04:52.090 "config": [ 00:04:52.090 { 00:04:52.090 "method": "nvmf_set_config", 00:04:52.090 "params": { 00:04:52.090 "discovery_filter": "match_any", 00:04:52.090 "admin_cmd_passthru": { 00:04:52.090 "identify_ctrlr": false 00:04:52.090 } 00:04:52.090 } 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "method": "nvmf_set_max_subsystems", 00:04:52.090 "params": { 00:04:52.090 "max_subsystems": 1024 00:04:52.090 } 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "method": "nvmf_set_crdt", 00:04:52.090 "params": { 00:04:52.090 "crdt1": 0, 00:04:52.090 "crdt2": 0, 00:04:52.090 "crdt3": 0 00:04:52.090 } 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "method": "nvmf_create_transport", 00:04:52.090 "params": { 00:04:52.090 "trtype": "TCP", 00:04:52.090 "max_queue_depth": 128, 00:04:52.090 "max_io_qpairs_per_ctrlr": 127, 00:04:52.090 "in_capsule_data_size": 4096, 00:04:52.090 "max_io_size": 131072, 00:04:52.090 "io_unit_size": 131072, 00:04:52.090 "max_aq_depth": 128, 00:04:52.090 "num_shared_buffers": 511, 00:04:52.090 "buf_cache_size": 4294967295, 00:04:52.090 "dif_insert_or_strip": false, 00:04:52.090 "zcopy": false, 00:04:52.090 "c2h_success": true, 00:04:52.090 "sock_priority": 0, 00:04:52.090 "abort_timeout_sec": 1, 00:04:52.090 "ack_timeout": 0, 00:04:52.090 "data_wr_pool_size": 0 00:04:52.090 } 00:04:52.090 } 00:04:52.090 ] 00:04:52.090 }, 00:04:52.090 { 00:04:52.090 "subsystem": "iscsi", 00:04:52.090 "config": [ 00:04:52.090 { 00:04:52.090 "method": "iscsi_set_options", 00:04:52.090 "params": { 00:04:52.090 "node_base": "iqn.2016-06.io.spdk", 00:04:52.090 "max_sessions": 128, 00:04:52.090 "max_connections_per_session": 2, 
00:04:52.090 "max_queue_depth": 64, 00:04:52.090 "default_time2wait": 2, 00:04:52.090 "default_time2retain": 20, 00:04:52.090 "first_burst_length": 8192, 00:04:52.090 "immediate_data": true, 00:04:52.090 "allow_duplicated_isid": false, 00:04:52.090 "error_recovery_level": 0, 00:04:52.090 "nop_timeout": 60, 00:04:52.090 "nop_in_interval": 30, 00:04:52.090 "disable_chap": false, 00:04:52.090 "require_chap": false, 00:04:52.090 "mutual_chap": false, 00:04:52.090 "chap_group": 0, 00:04:52.090 "max_large_datain_per_connection": 64, 00:04:52.090 "max_r2t_per_connection": 4, 00:04:52.090 "pdu_pool_size": 36864, 00:04:52.090 "immediate_data_pool_size": 16384, 00:04:52.090 "data_out_pool_size": 2048 00:04:52.090 } 00:04:52.090 } 00:04:52.090 ] 00:04:52.090 } 00:04:52.090 ] 00:04:52.090 } 00:04:52.090 12:47:57 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:52.090 12:47:57 -- rpc/skip_rpc.sh@40 -- # killprocess 3756731 00:04:52.090 12:47:57 -- common/autotest_common.sh@936 -- # '[' -z 3756731 ']' 00:04:52.090 12:47:57 -- common/autotest_common.sh@940 -- # kill -0 3756731 00:04:52.090 12:47:57 -- common/autotest_common.sh@941 -- # uname 00:04:52.090 12:47:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.090 12:47:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3756731 00:04:52.090 12:47:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.090 12:47:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.090 12:47:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3756731' 00:04:52.090 killing process with pid 3756731 00:04:52.090 12:47:57 -- common/autotest_common.sh@955 -- # kill 3756731 00:04:52.090 12:47:57 -- common/autotest_common.sh@960 -- # wait 3756731 00:04:52.351 12:47:57 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3757032 00:04:52.351 12:47:57 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:52.351 12:47:57 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.668 12:48:02 -- rpc/skip_rpc.sh@50 -- # killprocess 3757032 00:04:57.668 12:48:02 -- common/autotest_common.sh@936 -- # '[' -z 3757032 ']' 00:04:57.668 12:48:02 -- common/autotest_common.sh@940 -- # kill -0 3757032 00:04:57.668 12:48:02 -- common/autotest_common.sh@941 -- # uname 00:04:57.668 12:48:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.668 12:48:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3757032 00:04:57.668 12:48:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.668 12:48:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.668 12:48:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3757032' 00:04:57.668 killing process with pid 3757032 00:04:57.668 12:48:02 -- common/autotest_common.sh@955 -- # kill 3757032 00:04:57.668 12:48:02 -- common/autotest_common.sh@960 -- # wait 3757032 00:04:57.668 12:48:02 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.668 12:48:02 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.668 00:04:57.668 real 0m6.547s 00:04:57.668 user 0m6.451s 00:04:57.668 sys 0m0.514s 00:04:57.668 12:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.668 12:48:02 -- common/autotest_common.sh@10 -- 
# set +x 00:04:57.668 ************************************ 00:04:57.668 END TEST skip_rpc_with_json 00:04:57.668 ************************************ 00:04:57.668 12:48:02 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.668 12:48:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.668 12:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.668 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:04:57.668 ************************************ 00:04:57.668 START TEST skip_rpc_with_delay 00:04:57.668 ************************************ 00:04:57.668 12:48:02 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:57.668 12:48:02 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.668 12:48:02 -- common/autotest_common.sh@638 -- # local es=0 00:04:57.668 12:48:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.668 12:48:02 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.668 12:48:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:57.668 12:48:02 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.929 12:48:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:57.929 12:48:02 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.929 12:48:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:57.929 12:48:02 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.929 12:48:02 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.929 12:48:02 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.929 [2024-04-26 12:48:02.787351] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:57.929 [2024-04-26 12:48:02.787452] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.929 12:48:02 -- common/autotest_common.sh@641 -- # es=1 00:04:57.929 12:48:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:57.929 12:48:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:57.929 12:48:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:57.929 00:04:57.929 real 0m0.076s 00:04:57.929 user 0m0.050s 00:04:57.929 sys 0m0.025s 00:04:57.929 12:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.929 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:04:57.929 ************************************ 00:04:57.929 END TEST skip_rpc_with_delay 00:04:57.929 ************************************ 00:04:57.929 12:48:02 -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.929 12:48:02 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.930 12:48:02 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.930 12:48:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.930 12:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.930 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.191 ************************************ 00:04:58.191 START TEST exit_on_failed_rpc_init 00:04:58.191 ************************************ 00:04:58.191 12:48:02 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:58.191 12:48:02 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3758388 00:04:58.191 12:48:02 -- rpc/skip_rpc.sh@63 -- # waitforlisten 3758388 00:04:58.191 12:48:02 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.191 12:48:02 -- common/autotest_common.sh@817 -- # '[' -z 3758388 ']' 00:04:58.191 12:48:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.191 12:48:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.191 12:48:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.191 12:48:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.191 12:48:03 -- common/autotest_common.sh@10 -- # set +x 00:04:58.191 [2024-04-26 12:48:03.052978] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:04:58.191 [2024-04-26 12:48:03.053034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758388 ] 00:04:58.191 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.191 [2024-04-26 12:48:03.120619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.191 [2024-04-26 12:48:03.194140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.135 12:48:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:59.135 12:48:03 -- common/autotest_common.sh@850 -- # return 0 00:04:59.135 12:48:03 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.135 12:48:03 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.135 12:48:03 -- common/autotest_common.sh@638 -- # local es=0 00:04:59.135 12:48:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.135 12:48:03 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.135 12:48:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:59.135 12:48:03 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.135 12:48:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:59.135 12:48:03 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.135 12:48:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:59.135 12:48:03 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.135 12:48:03 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:59.135 12:48:03 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.135 [2024-04-26 12:48:03.878949] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:04:59.135 [2024-04-26 12:48:03.878998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758555 ] 00:04:59.135 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.135 [2024-04-26 12:48:03.954092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.135 [2024-04-26 12:48:04.016250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.135 [2024-04-26 12:48:04.016312] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
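Note: the error just logged is exactly what exit_on_failed_rpc_init wants to see — a second target bound to the same default RPC socket (/var/tmp/spdk.sock) cannot start its RPC service and exits non-zero. A rough reproduction, with the sleep standing in for the test's waitforlisten helper and paths shortened:

    build/bin/spdk_tgt -m 0x1 &          # first instance owns /var/tmp/spdk.sock
    first=$!
    sleep 2                              # crude stand-in for waitforlisten
    build/bin/spdk_tgt -m 0x2            # second instance: rpc_listen fails, app stops
    echo "second target exit code: $?"   # the test asserts this is non-zero
    kill "$first"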
00:04:59.135 [2024-04-26 12:48:04.016322] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.135 [2024-04-26 12:48:04.016329] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:59.135 12:48:04 -- common/autotest_common.sh@641 -- # es=234 00:04:59.135 12:48:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:59.135 12:48:04 -- common/autotest_common.sh@650 -- # es=106 00:04:59.135 12:48:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:59.135 12:48:04 -- common/autotest_common.sh@658 -- # es=1 00:04:59.135 12:48:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:59.135 12:48:04 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:59.135 12:48:04 -- rpc/skip_rpc.sh@70 -- # killprocess 3758388 00:04:59.135 12:48:04 -- common/autotest_common.sh@936 -- # '[' -z 3758388 ']' 00:04:59.135 12:48:04 -- common/autotest_common.sh@940 -- # kill -0 3758388 00:04:59.135 12:48:04 -- common/autotest_common.sh@941 -- # uname 00:04:59.135 12:48:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.135 12:48:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3758388 00:04:59.135 12:48:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.135 12:48:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.135 12:48:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3758388' 00:04:59.135 killing process with pid 3758388 00:04:59.135 12:48:04 -- common/autotest_common.sh@955 -- # kill 3758388 00:04:59.135 12:48:04 -- common/autotest_common.sh@960 -- # wait 3758388 00:04:59.404 00:04:59.404 real 0m1.342s 00:04:59.404 user 0m1.572s 00:04:59.404 sys 0m0.368s 00:04:59.404 12:48:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.404 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:04:59.404 ************************************ 00:04:59.404 END TEST exit_on_failed_rpc_init 00:04:59.404 ************************************ 00:04:59.404 12:48:04 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.404 00:04:59.404 real 0m14.109s 00:04:59.404 user 0m13.484s 00:04:59.404 sys 0m1.622s 00:04:59.404 12:48:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.404 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:04:59.404 ************************************ 00:04:59.404 END TEST skip_rpc 00:04:59.404 ************************************ 00:04:59.404 12:48:04 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.404 12:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.404 12:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.404 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:04:59.665 ************************************ 00:04:59.665 START TEST rpc_client 00:04:59.665 ************************************ 00:04:59.665 12:48:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.665 * Looking for test storage... 
00:04:59.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:59.665 12:48:04 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.665 OK 00:04:59.665 12:48:04 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.665 00:04:59.665 real 0m0.134s 00:04:59.665 user 0m0.061s 00:04:59.665 sys 0m0.083s 00:04:59.665 12:48:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.665 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:04:59.665 ************************************ 00:04:59.665 END TEST rpc_client 00:04:59.665 ************************************ 00:04:59.927 12:48:04 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.927 12:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.927 12:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.927 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:04:59.927 ************************************ 00:04:59.927 START TEST json_config 00:04:59.927 ************************************ 00:04:59.927 12:48:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.927 12:48:04 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.189 12:48:04 -- nvmf/common.sh@7 -- # uname -s 00:05:00.189 12:48:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.189 12:48:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.189 12:48:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.189 12:48:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.189 12:48:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.189 12:48:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.189 12:48:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.189 12:48:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.189 12:48:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.189 12:48:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.189 12:48:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:00.189 12:48:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:00.189 12:48:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.189 12:48:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.189 12:48:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.189 12:48:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.189 12:48:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:00.189 12:48:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.189 12:48:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.189 12:48:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.189 12:48:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.189 12:48:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.189 12:48:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.189 12:48:05 -- paths/export.sh@5 -- # export PATH 00:05:00.190 12:48:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.190 12:48:05 -- nvmf/common.sh@47 -- # : 0 00:05:00.190 12:48:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:00.190 12:48:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:00.190 12:48:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.190 12:48:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.190 12:48:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.190 12:48:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:00.190 12:48:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:00.190 12:48:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:00.190 12:48:05 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:00.190 12:48:05 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.190 12:48:05 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.190 12:48:05 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.190 12:48:05 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.190 12:48:05 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:00.190 12:48:05 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:00.190 12:48:05 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:00.190 12:48:05 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:00.190 12:48:05 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:00.190 12:48:05 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:00.190 12:48:05 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:00.190 12:48:05 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:00.190 12:48:05 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:00.190 12:48:05 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.190 12:48:05 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:00.190 INFO: JSON configuration test init 00:05:00.190 12:48:05 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:00.190 12:48:05 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:00.190 12:48:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:00.190 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:00.190 12:48:05 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:00.190 12:48:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:00.190 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:00.190 12:48:05 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:00.190 12:48:05 -- json_config/common.sh@9 -- # local app=target 00:05:00.190 12:48:05 -- json_config/common.sh@10 -- # shift 00:05:00.190 12:48:05 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.190 12:48:05 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.190 12:48:05 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.190 12:48:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.190 12:48:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.190 12:48:05 -- json_config/common.sh@22 -- # app_pid["$app"]=3759010 00:05:00.190 12:48:05 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.190 Waiting for target to run... 00:05:00.190 12:48:05 -- json_config/common.sh@25 -- # waitforlisten 3759010 /var/tmp/spdk_tgt.sock 00:05:00.190 12:48:05 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:00.190 12:48:05 -- common/autotest_common.sh@817 -- # '[' -z 3759010 ']' 00:05:00.190 12:48:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.190 12:48:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:00.190 12:48:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.190 12:48:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:00.190 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:00.190 [2024-04-26 12:48:05.090641] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
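Note: the json_config test starting here drives the target entirely over a private Unix RPC socket. Roughly, using the flags visible in the log (repo-relative paths, and the gen_nvme/load_config pairing shown as a plain pipe for brevity):

    # Start the target paused on its own RPC socket.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # tgt_rpc is just rpc.py aimed at that socket, e.g. loading the generated NVMe
    # config and later saving the resulting configuration back out as JSON.
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json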
00:05:00.190 [2024-04-26 12:48:05.090700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3759010 ] 00:05:00.190 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.452 [2024-04-26 12:48:05.511793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.713 [2024-04-26 12:48:05.571048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.975 12:48:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:00.975 12:48:05 -- common/autotest_common.sh@850 -- # return 0 00:05:00.975 12:48:05 -- json_config/common.sh@26 -- # echo '' 00:05:00.975 00:05:00.975 12:48:05 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:00.975 12:48:05 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:00.975 12:48:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:00.975 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:00.975 12:48:05 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:00.975 12:48:05 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:00.975 12:48:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:00.975 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:00.975 12:48:05 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:00.975 12:48:05 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:00.975 12:48:05 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:01.548 12:48:06 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:01.548 12:48:06 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:01.548 12:48:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:01.548 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:01.548 12:48:06 -- json_config/json_config.sh@45 -- # local ret=0 00:05:01.548 12:48:06 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:01.548 12:48:06 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:01.548 12:48:06 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:01.548 12:48:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:01.548 12:48:06 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:01.810 12:48:06 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:01.810 12:48:06 -- json_config/json_config.sh@48 -- # local get_types 00:05:01.810 12:48:06 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:01.810 12:48:06 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:01.810 12:48:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:01.810 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:01.810 12:48:06 -- json_config/json_config.sh@55 -- # return 0 00:05:01.810 12:48:06 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:01.810 12:48:06 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:01.810 12:48:06 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:01.810 12:48:06 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:01.810 12:48:06 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:01.810 12:48:06 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:01.810 12:48:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:01.810 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:01.810 12:48:06 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:01.810 12:48:06 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:01.810 12:48:06 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:01.810 12:48:06 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.810 12:48:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.810 MallocForNvmf0 00:05:01.810 12:48:06 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.810 12:48:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.071 MallocForNvmf1 00:05:02.071 12:48:06 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:02.071 12:48:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:02.332 [2024-04-26 12:48:07.137965] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.332 12:48:07 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:02.332 12:48:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:02.332 12:48:07 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:02.332 12:48:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:02.592 12:48:07 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.592 12:48:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.592 12:48:07 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.592 12:48:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.854 [2024-04-26 12:48:07.731901] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.854 12:48:07 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:02.854 12:48:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:02.854 
12:48:07 -- common/autotest_common.sh@10 -- # set +x 00:05:02.854 12:48:07 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:02.854 12:48:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:02.854 12:48:07 -- common/autotest_common.sh@10 -- # set +x 00:05:02.854 12:48:07 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:02.854 12:48:07 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.854 12:48:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:03.115 MallocBdevForConfigChangeCheck 00:05:03.115 12:48:07 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:03.115 12:48:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:03.115 12:48:07 -- common/autotest_common.sh@10 -- # set +x 00:05:03.115 12:48:08 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:03.115 12:48:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.376 12:48:08 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:03.376 INFO: shutting down applications... 00:05:03.376 12:48:08 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:03.376 12:48:08 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:03.376 12:48:08 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:03.376 12:48:08 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:03.638 Calling clear_iscsi_subsystem 00:05:03.638 Calling clear_nvmf_subsystem 00:05:03.638 Calling clear_nbd_subsystem 00:05:03.638 Calling clear_ublk_subsystem 00:05:03.638 Calling clear_vhost_blk_subsystem 00:05:03.638 Calling clear_vhost_scsi_subsystem 00:05:03.638 Calling clear_bdev_subsystem 00:05:03.899 12:48:08 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:03.899 12:48:08 -- json_config/json_config.sh@343 -- # count=100 00:05:03.899 12:48:08 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:03.899 12:48:08 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.899 12:48:08 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:03.899 12:48:08 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:04.160 12:48:09 -- json_config/json_config.sh@345 -- # break 00:05:04.160 12:48:09 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:04.160 12:48:09 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:04.160 12:48:09 -- json_config/common.sh@31 -- # local app=target 00:05:04.160 12:48:09 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.160 12:48:09 -- json_config/common.sh@35 -- # [[ -n 3759010 ]] 00:05:04.160 12:48:09 -- json_config/common.sh@38 -- # kill -SIGINT 3759010 00:05:04.160 12:48:09 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.160 12:48:09 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.160 12:48:09 -- json_config/common.sh@41 -- # kill -0 3759010 00:05:04.160 12:48:09 -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.733 12:48:09 -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.733 12:48:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.733 12:48:09 -- json_config/common.sh@41 -- # kill -0 3759010 00:05:04.733 12:48:09 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.733 12:48:09 -- json_config/common.sh@43 -- # break 00:05:04.733 12:48:09 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.733 12:48:09 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.733 SPDK target shutdown done 00:05:04.733 12:48:09 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:04.733 INFO: relaunching applications... 00:05:04.734 12:48:09 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.734 12:48:09 -- json_config/common.sh@9 -- # local app=target 00:05:04.734 12:48:09 -- json_config/common.sh@10 -- # shift 00:05:04.734 12:48:09 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.734 12:48:09 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.734 12:48:09 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.734 12:48:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.734 12:48:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.734 12:48:09 -- json_config/common.sh@22 -- # app_pid["$app"]=3760267 00:05:04.734 12:48:09 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.734 Waiting for target to run... 00:05:04.734 12:48:09 -- json_config/common.sh@25 -- # waitforlisten 3760267 /var/tmp/spdk_tgt.sock 00:05:04.734 12:48:09 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.734 12:48:09 -- common/autotest_common.sh@817 -- # '[' -z 3760267 ']' 00:05:04.734 12:48:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.734 12:48:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.734 12:48:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.734 12:48:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.734 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:05:04.734 [2024-04-26 12:48:09.577626] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
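Note: the relaunch above feeds spdk_tgt_config.json back into a fresh target; that file was produced moments earlier by the RPC sequence traced in this same test. Collected in one place (socket path, names and sizes copied from the log; $RPC is shorthand introduced only here):

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB malloc bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB malloc bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0            # TCP transport (flags as in the log)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420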
00:05:04.734 [2024-04-26 12:48:09.577695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760267 ] 00:05:04.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.995 [2024-04-26 12:48:09.871068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.995 [2024-04-26 12:48:09.928584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.567 [2024-04-26 12:48:10.416528] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.567 [2024-04-26 12:48:10.448890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.567 12:48:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.567 12:48:10 -- common/autotest_common.sh@850 -- # return 0 00:05:05.567 12:48:10 -- json_config/common.sh@26 -- # echo '' 00:05:05.567 00:05:05.567 12:48:10 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:05.567 12:48:10 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:05.567 INFO: Checking if target configuration is the same... 00:05:05.567 12:48:10 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.567 12:48:10 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:05.567 12:48:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.567 + '[' 2 -ne 2 ']' 00:05:05.567 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.567 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:05.567 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.567 +++ basename /dev/fd/62 00:05:05.567 ++ mktemp /tmp/62.XXX 00:05:05.567 + tmp_file_1=/tmp/62.hMT 00:05:05.567 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.567 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.567 + tmp_file_2=/tmp/spdk_tgt_config.json.fnd 00:05:05.567 + ret=0 00:05:05.567 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.828 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.828 + diff -u /tmp/62.hMT /tmp/spdk_tgt_config.json.fnd 00:05:05.828 + echo 'INFO: JSON config files are the same' 00:05:05.828 INFO: JSON config files are the same 00:05:05.828 + rm /tmp/62.hMT /tmp/spdk_tgt_config.json.fnd 00:05:05.828 + exit 0 00:05:05.828 12:48:10 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:05.828 12:48:10 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:05.828 INFO: changing configuration and checking if this can be detected... 
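Note: the "same configuration" check above is a normalized diff — json_diff.sh saves the running config over RPC, sorts both documents with config_filter.py, and compares them. A compressed sketch (the /tmp filenames are illustrative; the real script uses mktemp):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/running.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ondisk.json
    diff -u /tmp/running.json /tmp/ondisk.json && echo 'INFO: JSON config files are the same'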
00:05:05.828 12:48:10 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.828 12:48:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.089 12:48:10 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.089 12:48:10 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:06.089 12:48:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.089 + '[' 2 -ne 2 ']' 00:05:06.089 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.089 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:06.089 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.089 +++ basename /dev/fd/62 00:05:06.089 ++ mktemp /tmp/62.XXX 00:05:06.089 + tmp_file_1=/tmp/62.5uo 00:05:06.089 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.089 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.089 + tmp_file_2=/tmp/spdk_tgt_config.json.IT3 00:05:06.089 + ret=0 00:05:06.089 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.350 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.350 + diff -u /tmp/62.5uo /tmp/spdk_tgt_config.json.IT3 00:05:06.350 + ret=1 00:05:06.350 + echo '=== Start of file: /tmp/62.5uo ===' 00:05:06.350 + cat /tmp/62.5uo 00:05:06.350 + echo '=== End of file: /tmp/62.5uo ===' 00:05:06.350 + echo '' 00:05:06.350 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IT3 ===' 00:05:06.350 + cat /tmp/spdk_tgt_config.json.IT3 00:05:06.350 + echo '=== End of file: /tmp/spdk_tgt_config.json.IT3 ===' 00:05:06.350 + echo '' 00:05:06.350 + rm /tmp/62.5uo /tmp/spdk_tgt_config.json.IT3 00:05:06.350 + exit 1 00:05:06.350 12:48:11 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:06.350 INFO: configuration change detected. 
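Note: the "change" injected above is deliberately trivial — the sentinel bdev created during setup is deleted, so the next normalized diff no longer matches the file on disk. Sketch, reusing the same comparison as before:

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # MallocBdevForConfigChangeCheck exists only so there is something safe to remove.
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
    # Re-running the sorted save_config diff now exits 1, which the test reports
    # as "INFO: configuration change detected."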
00:05:06.350 12:48:11 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:06.350 12:48:11 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:06.350 12:48:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:06.350 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.350 12:48:11 -- json_config/json_config.sh@307 -- # local ret=0 00:05:06.350 12:48:11 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:06.350 12:48:11 -- json_config/json_config.sh@317 -- # [[ -n 3760267 ]] 00:05:06.350 12:48:11 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:06.350 12:48:11 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:06.350 12:48:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:06.350 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.350 12:48:11 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:06.350 12:48:11 -- json_config/json_config.sh@193 -- # uname -s 00:05:06.350 12:48:11 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:06.350 12:48:11 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:06.350 12:48:11 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:06.350 12:48:11 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:06.350 12:48:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.350 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.350 12:48:11 -- json_config/json_config.sh@323 -- # killprocess 3760267 00:05:06.350 12:48:11 -- common/autotest_common.sh@936 -- # '[' -z 3760267 ']' 00:05:06.350 12:48:11 -- common/autotest_common.sh@940 -- # kill -0 3760267 00:05:06.350 12:48:11 -- common/autotest_common.sh@941 -- # uname 00:05:06.350 12:48:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.350 12:48:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3760267 00:05:06.611 12:48:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.611 12:48:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.611 12:48:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3760267' 00:05:06.611 killing process with pid 3760267 00:05:06.611 12:48:11 -- common/autotest_common.sh@955 -- # kill 3760267 00:05:06.611 12:48:11 -- common/autotest_common.sh@960 -- # wait 3760267 00:05:06.872 12:48:11 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.872 12:48:11 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:06.872 12:48:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.872 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.872 12:48:11 -- json_config/json_config.sh@328 -- # return 0 00:05:06.872 12:48:11 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:06.872 INFO: Success 00:05:06.872 00:05:06.872 real 0m6.865s 00:05:06.872 user 0m8.194s 00:05:06.872 sys 0m1.816s 00:05:06.872 12:48:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.872 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.872 ************************************ 00:05:06.872 END TEST json_config 00:05:06.872 ************************************ 00:05:06.872 12:48:11 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:06.872 12:48:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.872 12:48:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.872 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:07.133 ************************************ 00:05:07.133 START TEST json_config_extra_key 00:05:07.133 ************************************ 00:05:07.133 12:48:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:07.133 12:48:12 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.133 12:48:12 -- nvmf/common.sh@7 -- # uname -s 00:05:07.133 12:48:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.133 12:48:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.133 12:48:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.133 12:48:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.133 12:48:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.133 12:48:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.133 12:48:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.133 12:48:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.133 12:48:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.133 12:48:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.133 12:48:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.133 12:48:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.133 12:48:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.133 12:48:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.133 12:48:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.133 12:48:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.133 12:48:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.134 12:48:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.134 12:48:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.134 12:48:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.134 12:48:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.134 12:48:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.134 12:48:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.134 12:48:12 -- paths/export.sh@5 -- # export PATH 00:05:07.134 12:48:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.134 12:48:12 -- nvmf/common.sh@47 -- # : 0 00:05:07.134 12:48:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:07.134 12:48:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:07.134 12:48:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.134 12:48:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.134 12:48:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.134 12:48:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:07.134 12:48:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:07.134 12:48:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:07.134 INFO: launching applications... 
00:05:07.134 12:48:12 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.134 12:48:12 -- json_config/common.sh@9 -- # local app=target 00:05:07.134 12:48:12 -- json_config/common.sh@10 -- # shift 00:05:07.134 12:48:12 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.134 12:48:12 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.134 12:48:12 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.134 12:48:12 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.134 12:48:12 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.134 12:48:12 -- json_config/common.sh@22 -- # app_pid["$app"]=3761064 00:05:07.134 12:48:12 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.134 Waiting for target to run... 00:05:07.134 12:48:12 -- json_config/common.sh@25 -- # waitforlisten 3761064 /var/tmp/spdk_tgt.sock 00:05:07.134 12:48:12 -- common/autotest_common.sh@817 -- # '[' -z 3761064 ']' 00:05:07.134 12:48:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.134 12:48:12 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.134 12:48:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:07.134 12:48:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.134 12:48:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:07.134 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:05:07.134 [2024-04-26 12:48:12.110726] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:07.134 [2024-04-26 12:48:12.110774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761064 ] 00:05:07.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.395 [2024-04-26 12:48:12.383320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.395 [2024-04-26 12:48:12.433202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.965 12:48:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:07.965 12:48:12 -- common/autotest_common.sh@850 -- # return 0 00:05:07.965 12:48:12 -- json_config/common.sh@26 -- # echo '' 00:05:07.965 00:05:07.965 12:48:12 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:07.965 INFO: shutting down applications... 
00:05:07.965 12:48:12 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:07.965 12:48:12 -- json_config/common.sh@31 -- # local app=target 00:05:07.965 12:48:12 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.965 12:48:12 -- json_config/common.sh@35 -- # [[ -n 3761064 ]] 00:05:07.965 12:48:12 -- json_config/common.sh@38 -- # kill -SIGINT 3761064 00:05:07.965 12:48:12 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.965 12:48:12 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.965 12:48:12 -- json_config/common.sh@41 -- # kill -0 3761064 00:05:07.965 12:48:12 -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.535 12:48:13 -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.535 12:48:13 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.535 12:48:13 -- json_config/common.sh@41 -- # kill -0 3761064 00:05:08.535 12:48:13 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.535 12:48:13 -- json_config/common.sh@43 -- # break 00:05:08.535 12:48:13 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.535 12:48:13 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.535 SPDK target shutdown done 00:05:08.535 12:48:13 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:08.535 Success 00:05:08.535 00:05:08.535 real 0m1.413s 00:05:08.535 user 0m1.043s 00:05:08.535 sys 0m0.370s 00:05:08.535 12:48:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.535 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:05:08.535 ************************************ 00:05:08.535 END TEST json_config_extra_key 00:05:08.535 ************************************ 00:05:08.535 12:48:13 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.535 12:48:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.535 12:48:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.535 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:05:08.535 ************************************ 00:05:08.535 START TEST alias_rpc 00:05:08.535 ************************************ 00:05:08.535 12:48:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.795 * Looking for test storage... 00:05:08.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:08.795 12:48:13 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.795 12:48:13 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3761450 00:05:08.795 12:48:13 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3761450 00:05:08.795 12:48:13 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.795 12:48:13 -- common/autotest_common.sh@817 -- # '[' -z 3761450 ']' 00:05:08.795 12:48:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.795 12:48:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.795 12:48:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
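Note: the shutdown path used above (and earlier for the json_config targets) is a SIGINT followed by a bounded poll. Roughly, with $pid standing for the app_pid the common.sh helper tracks:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: clean shutdown
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'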
00:05:08.795 12:48:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.795 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:05:08.795 [2024-04-26 12:48:13.703235] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:08.795 [2024-04-26 12:48:13.703290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761450 ] 00:05:08.795 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.795 [2024-04-26 12:48:13.764993] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.795 [2024-04-26 12:48:13.832831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.735 12:48:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.735 12:48:14 -- common/autotest_common.sh@850 -- # return 0 00:05:09.735 12:48:14 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:09.735 12:48:14 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3761450 00:05:09.735 12:48:14 -- common/autotest_common.sh@936 -- # '[' -z 3761450 ']' 00:05:09.735 12:48:14 -- common/autotest_common.sh@940 -- # kill -0 3761450 00:05:09.735 12:48:14 -- common/autotest_common.sh@941 -- # uname 00:05:09.735 12:48:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.735 12:48:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3761450 00:05:09.735 12:48:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.735 12:48:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.735 12:48:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3761450' 00:05:09.735 killing process with pid 3761450 00:05:09.735 12:48:14 -- common/autotest_common.sh@955 -- # kill 3761450 00:05:09.735 12:48:14 -- common/autotest_common.sh@960 -- # wait 3761450 00:05:09.995 00:05:09.995 real 0m1.354s 00:05:09.995 user 0m1.479s 00:05:09.995 sys 0m0.365s 00:05:09.995 12:48:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.995 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:05:09.995 ************************************ 00:05:09.995 END TEST alias_rpc 00:05:09.995 ************************************ 00:05:09.995 12:48:14 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:09.995 12:48:14 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.995 12:48:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.995 12:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.995 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:05:10.256 ************************************ 00:05:10.256 START TEST spdkcli_tcp 00:05:10.256 ************************************ 00:05:10.256 12:48:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.256 * Looking for test storage... 
00:05:10.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:10.256 12:48:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:10.256 12:48:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.256 12:48:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:10.256 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3761855 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@27 -- # waitforlisten 3761855 00:05:10.256 12:48:15 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.256 12:48:15 -- common/autotest_common.sh@817 -- # '[' -z 3761855 ']' 00:05:10.256 12:48:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.256 12:48:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.256 12:48:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.256 12:48:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.256 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:05:10.256 [2024-04-26 12:48:15.264677] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
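With IP_ADDRESS=127.0.0.1 and PORT=9998 set, tcp.sh exercises rpc.py over TCP instead of the UNIX socket: a socat process bridges the TCP port to /var/tmp/spdk.sock and rpc.py is pointed at that address and port, as the trace below shows. Reduced to its two commands:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &               # bridge TCP 9998 -> RPC socket
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods     # -r/-t: retries and timeout while the bridge comes up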
00:05:10.256 [2024-04-26 12:48:15.264743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761855 ] 00:05:10.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.517 [2024-04-26 12:48:15.331893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.517 [2024-04-26 12:48:15.405312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.517 [2024-04-26 12:48:15.405314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.089 12:48:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:11.089 12:48:16 -- common/autotest_common.sh@850 -- # return 0 00:05:11.089 12:48:16 -- spdkcli/tcp.sh@31 -- # socat_pid=3761868 00:05:11.089 12:48:16 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.089 12:48:16 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.350 [ 00:05:11.350 "bdev_malloc_delete", 00:05:11.350 "bdev_malloc_create", 00:05:11.350 "bdev_null_resize", 00:05:11.350 "bdev_null_delete", 00:05:11.350 "bdev_null_create", 00:05:11.350 "bdev_nvme_cuse_unregister", 00:05:11.350 "bdev_nvme_cuse_register", 00:05:11.350 "bdev_opal_new_user", 00:05:11.350 "bdev_opal_set_lock_state", 00:05:11.350 "bdev_opal_delete", 00:05:11.350 "bdev_opal_get_info", 00:05:11.350 "bdev_opal_create", 00:05:11.350 "bdev_nvme_opal_revert", 00:05:11.350 "bdev_nvme_opal_init", 00:05:11.350 "bdev_nvme_send_cmd", 00:05:11.350 "bdev_nvme_get_path_iostat", 00:05:11.350 "bdev_nvme_get_mdns_discovery_info", 00:05:11.350 "bdev_nvme_stop_mdns_discovery", 00:05:11.350 "bdev_nvme_start_mdns_discovery", 00:05:11.350 "bdev_nvme_set_multipath_policy", 00:05:11.350 "bdev_nvme_set_preferred_path", 00:05:11.350 "bdev_nvme_get_io_paths", 00:05:11.350 "bdev_nvme_remove_error_injection", 00:05:11.350 "bdev_nvme_add_error_injection", 00:05:11.350 "bdev_nvme_get_discovery_info", 00:05:11.350 "bdev_nvme_stop_discovery", 00:05:11.350 "bdev_nvme_start_discovery", 00:05:11.350 "bdev_nvme_get_controller_health_info", 00:05:11.350 "bdev_nvme_disable_controller", 00:05:11.350 "bdev_nvme_enable_controller", 00:05:11.350 "bdev_nvme_reset_controller", 00:05:11.350 "bdev_nvme_get_transport_statistics", 00:05:11.350 "bdev_nvme_apply_firmware", 00:05:11.350 "bdev_nvme_detach_controller", 00:05:11.350 "bdev_nvme_get_controllers", 00:05:11.350 "bdev_nvme_attach_controller", 00:05:11.350 "bdev_nvme_set_hotplug", 00:05:11.350 "bdev_nvme_set_options", 00:05:11.350 "bdev_passthru_delete", 00:05:11.350 "bdev_passthru_create", 00:05:11.350 "bdev_lvol_grow_lvstore", 00:05:11.350 "bdev_lvol_get_lvols", 00:05:11.350 "bdev_lvol_get_lvstores", 00:05:11.350 "bdev_lvol_delete", 00:05:11.350 "bdev_lvol_set_read_only", 00:05:11.350 "bdev_lvol_resize", 00:05:11.350 "bdev_lvol_decouple_parent", 00:05:11.350 "bdev_lvol_inflate", 00:05:11.350 "bdev_lvol_rename", 00:05:11.350 "bdev_lvol_clone_bdev", 00:05:11.350 "bdev_lvol_clone", 00:05:11.350 "bdev_lvol_snapshot", 00:05:11.350 "bdev_lvol_create", 00:05:11.350 "bdev_lvol_delete_lvstore", 00:05:11.350 "bdev_lvol_rename_lvstore", 00:05:11.350 "bdev_lvol_create_lvstore", 00:05:11.350 "bdev_raid_set_options", 00:05:11.350 "bdev_raid_remove_base_bdev", 00:05:11.350 "bdev_raid_add_base_bdev", 00:05:11.350 "bdev_raid_delete", 00:05:11.350 "bdev_raid_create", 
00:05:11.350 "bdev_raid_get_bdevs", 00:05:11.350 "bdev_error_inject_error", 00:05:11.350 "bdev_error_delete", 00:05:11.350 "bdev_error_create", 00:05:11.350 "bdev_split_delete", 00:05:11.350 "bdev_split_create", 00:05:11.350 "bdev_delay_delete", 00:05:11.350 "bdev_delay_create", 00:05:11.350 "bdev_delay_update_latency", 00:05:11.350 "bdev_zone_block_delete", 00:05:11.350 "bdev_zone_block_create", 00:05:11.350 "blobfs_create", 00:05:11.350 "blobfs_detect", 00:05:11.350 "blobfs_set_cache_size", 00:05:11.350 "bdev_aio_delete", 00:05:11.350 "bdev_aio_rescan", 00:05:11.350 "bdev_aio_create", 00:05:11.350 "bdev_ftl_set_property", 00:05:11.350 "bdev_ftl_get_properties", 00:05:11.350 "bdev_ftl_get_stats", 00:05:11.350 "bdev_ftl_unmap", 00:05:11.350 "bdev_ftl_unload", 00:05:11.350 "bdev_ftl_delete", 00:05:11.350 "bdev_ftl_load", 00:05:11.350 "bdev_ftl_create", 00:05:11.350 "bdev_virtio_attach_controller", 00:05:11.350 "bdev_virtio_scsi_get_devices", 00:05:11.350 "bdev_virtio_detach_controller", 00:05:11.350 "bdev_virtio_blk_set_hotplug", 00:05:11.350 "bdev_iscsi_delete", 00:05:11.350 "bdev_iscsi_create", 00:05:11.350 "bdev_iscsi_set_options", 00:05:11.350 "accel_error_inject_error", 00:05:11.350 "ioat_scan_accel_module", 00:05:11.350 "dsa_scan_accel_module", 00:05:11.350 "iaa_scan_accel_module", 00:05:11.350 "keyring_file_remove_key", 00:05:11.350 "keyring_file_add_key", 00:05:11.350 "iscsi_get_histogram", 00:05:11.350 "iscsi_enable_histogram", 00:05:11.350 "iscsi_set_options", 00:05:11.350 "iscsi_get_auth_groups", 00:05:11.350 "iscsi_auth_group_remove_secret", 00:05:11.350 "iscsi_auth_group_add_secret", 00:05:11.350 "iscsi_delete_auth_group", 00:05:11.350 "iscsi_create_auth_group", 00:05:11.350 "iscsi_set_discovery_auth", 00:05:11.350 "iscsi_get_options", 00:05:11.350 "iscsi_target_node_request_logout", 00:05:11.350 "iscsi_target_node_set_redirect", 00:05:11.350 "iscsi_target_node_set_auth", 00:05:11.350 "iscsi_target_node_add_lun", 00:05:11.350 "iscsi_get_stats", 00:05:11.350 "iscsi_get_connections", 00:05:11.350 "iscsi_portal_group_set_auth", 00:05:11.350 "iscsi_start_portal_group", 00:05:11.350 "iscsi_delete_portal_group", 00:05:11.350 "iscsi_create_portal_group", 00:05:11.350 "iscsi_get_portal_groups", 00:05:11.350 "iscsi_delete_target_node", 00:05:11.350 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.350 "iscsi_target_node_add_pg_ig_maps", 00:05:11.350 "iscsi_create_target_node", 00:05:11.350 "iscsi_get_target_nodes", 00:05:11.350 "iscsi_delete_initiator_group", 00:05:11.350 "iscsi_initiator_group_remove_initiators", 00:05:11.350 "iscsi_initiator_group_add_initiators", 00:05:11.350 "iscsi_create_initiator_group", 00:05:11.350 "iscsi_get_initiator_groups", 00:05:11.350 "nvmf_set_crdt", 00:05:11.350 "nvmf_set_config", 00:05:11.350 "nvmf_set_max_subsystems", 00:05:11.350 "nvmf_subsystem_get_listeners", 00:05:11.350 "nvmf_subsystem_get_qpairs", 00:05:11.350 "nvmf_subsystem_get_controllers", 00:05:11.350 "nvmf_get_stats", 00:05:11.350 "nvmf_get_transports", 00:05:11.350 "nvmf_create_transport", 00:05:11.350 "nvmf_get_targets", 00:05:11.350 "nvmf_delete_target", 00:05:11.350 "nvmf_create_target", 00:05:11.350 "nvmf_subsystem_allow_any_host", 00:05:11.350 "nvmf_subsystem_remove_host", 00:05:11.350 "nvmf_subsystem_add_host", 00:05:11.350 "nvmf_ns_remove_host", 00:05:11.350 "nvmf_ns_add_host", 00:05:11.350 "nvmf_subsystem_remove_ns", 00:05:11.350 "nvmf_subsystem_add_ns", 00:05:11.350 "nvmf_subsystem_listener_set_ana_state", 00:05:11.350 "nvmf_discovery_get_referrals", 00:05:11.350 
"nvmf_discovery_remove_referral", 00:05:11.350 "nvmf_discovery_add_referral", 00:05:11.350 "nvmf_subsystem_remove_listener", 00:05:11.350 "nvmf_subsystem_add_listener", 00:05:11.350 "nvmf_delete_subsystem", 00:05:11.350 "nvmf_create_subsystem", 00:05:11.350 "nvmf_get_subsystems", 00:05:11.350 "env_dpdk_get_mem_stats", 00:05:11.350 "nbd_get_disks", 00:05:11.350 "nbd_stop_disk", 00:05:11.350 "nbd_start_disk", 00:05:11.350 "ublk_recover_disk", 00:05:11.350 "ublk_get_disks", 00:05:11.350 "ublk_stop_disk", 00:05:11.350 "ublk_start_disk", 00:05:11.350 "ublk_destroy_target", 00:05:11.350 "ublk_create_target", 00:05:11.350 "virtio_blk_create_transport", 00:05:11.350 "virtio_blk_get_transports", 00:05:11.350 "vhost_controller_set_coalescing", 00:05:11.350 "vhost_get_controllers", 00:05:11.350 "vhost_delete_controller", 00:05:11.350 "vhost_create_blk_controller", 00:05:11.350 "vhost_scsi_controller_remove_target", 00:05:11.350 "vhost_scsi_controller_add_target", 00:05:11.350 "vhost_start_scsi_controller", 00:05:11.350 "vhost_create_scsi_controller", 00:05:11.350 "thread_set_cpumask", 00:05:11.350 "framework_get_scheduler", 00:05:11.350 "framework_set_scheduler", 00:05:11.350 "framework_get_reactors", 00:05:11.350 "thread_get_io_channels", 00:05:11.350 "thread_get_pollers", 00:05:11.350 "thread_get_stats", 00:05:11.350 "framework_monitor_context_switch", 00:05:11.350 "spdk_kill_instance", 00:05:11.350 "log_enable_timestamps", 00:05:11.350 "log_get_flags", 00:05:11.350 "log_clear_flag", 00:05:11.350 "log_set_flag", 00:05:11.350 "log_get_level", 00:05:11.350 "log_set_level", 00:05:11.350 "log_get_print_level", 00:05:11.350 "log_set_print_level", 00:05:11.350 "framework_enable_cpumask_locks", 00:05:11.350 "framework_disable_cpumask_locks", 00:05:11.350 "framework_wait_init", 00:05:11.350 "framework_start_init", 00:05:11.350 "scsi_get_devices", 00:05:11.350 "bdev_get_histogram", 00:05:11.350 "bdev_enable_histogram", 00:05:11.350 "bdev_set_qos_limit", 00:05:11.350 "bdev_set_qd_sampling_period", 00:05:11.350 "bdev_get_bdevs", 00:05:11.350 "bdev_reset_iostat", 00:05:11.350 "bdev_get_iostat", 00:05:11.350 "bdev_examine", 00:05:11.350 "bdev_wait_for_examine", 00:05:11.350 "bdev_set_options", 00:05:11.350 "notify_get_notifications", 00:05:11.350 "notify_get_types", 00:05:11.350 "accel_get_stats", 00:05:11.350 "accel_set_options", 00:05:11.350 "accel_set_driver", 00:05:11.350 "accel_crypto_key_destroy", 00:05:11.351 "accel_crypto_keys_get", 00:05:11.351 "accel_crypto_key_create", 00:05:11.351 "accel_assign_opc", 00:05:11.351 "accel_get_module_info", 00:05:11.351 "accel_get_opc_assignments", 00:05:11.351 "vmd_rescan", 00:05:11.351 "vmd_remove_device", 00:05:11.351 "vmd_enable", 00:05:11.351 "sock_get_default_impl", 00:05:11.351 "sock_set_default_impl", 00:05:11.351 "sock_impl_set_options", 00:05:11.351 "sock_impl_get_options", 00:05:11.351 "iobuf_get_stats", 00:05:11.351 "iobuf_set_options", 00:05:11.351 "framework_get_pci_devices", 00:05:11.351 "framework_get_config", 00:05:11.351 "framework_get_subsystems", 00:05:11.351 "trace_get_info", 00:05:11.351 "trace_get_tpoint_group_mask", 00:05:11.351 "trace_disable_tpoint_group", 00:05:11.351 "trace_enable_tpoint_group", 00:05:11.351 "trace_clear_tpoint_mask", 00:05:11.351 "trace_set_tpoint_mask", 00:05:11.351 "keyring_get_keys", 00:05:11.351 "spdk_get_version", 00:05:11.351 "rpc_get_methods" 00:05:11.351 ] 00:05:11.351 12:48:16 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.351 12:48:16 -- common/autotest_common.sh@716 -- # xtrace_disable 
00:05:11.351 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.351 12:48:16 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.351 12:48:16 -- spdkcli/tcp.sh@38 -- # killprocess 3761855 00:05:11.351 12:48:16 -- common/autotest_common.sh@936 -- # '[' -z 3761855 ']' 00:05:11.351 12:48:16 -- common/autotest_common.sh@940 -- # kill -0 3761855 00:05:11.351 12:48:16 -- common/autotest_common.sh@941 -- # uname 00:05:11.351 12:48:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.351 12:48:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3761855 00:05:11.351 12:48:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.351 12:48:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.351 12:48:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3761855' 00:05:11.351 killing process with pid 3761855 00:05:11.351 12:48:16 -- common/autotest_common.sh@955 -- # kill 3761855 00:05:11.351 12:48:16 -- common/autotest_common.sh@960 -- # wait 3761855 00:05:11.612 00:05:11.612 real 0m1.411s 00:05:11.612 user 0m2.587s 00:05:11.612 sys 0m0.420s 00:05:11.612 12:48:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.612 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.612 ************************************ 00:05:11.612 END TEST spdkcli_tcp 00:05:11.612 ************************************ 00:05:11.612 12:48:16 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.612 12:48:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.612 12:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.612 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.873 ************************************ 00:05:11.873 START TEST dpdk_mem_utility 00:05:11.873 ************************************ 00:05:11.873 12:48:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.873 * Looking for test storage... 00:05:11.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:11.873 12:48:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.873 12:48:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3762260 00:05:11.873 12:48:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3762260 00:05:11.873 12:48:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.873 12:48:16 -- common/autotest_common.sh@817 -- # '[' -z 3762260 ']' 00:05:11.873 12:48:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.873 12:48:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:11.873 12:48:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.873 12:48:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:11.873 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.873 [2024-04-26 12:48:16.858035] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:11.874 [2024-04-26 12:48:16.858104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762260 ] 00:05:11.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.874 [2024-04-26 12:48:16.922707] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.134 [2024-04-26 12:48:16.995338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.706 12:48:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.706 12:48:17 -- common/autotest_common.sh@850 -- # return 0 00:05:12.706 12:48:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.706 12:48:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.707 12:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.707 12:48:17 -- common/autotest_common.sh@10 -- # set +x 00:05:12.707 { 00:05:12.707 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.707 } 00:05:12.707 12:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.707 12:48:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.707 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:12.707 1 heaps totaling size 814.000000 MiB 00:05:12.707 size: 814.000000 MiB heap id: 0 00:05:12.707 end heaps---------- 00:05:12.707 8 mempools totaling size 598.116089 MiB 00:05:12.707 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:12.707 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:12.707 size: 84.521057 MiB name: bdev_io_3762260 00:05:12.707 size: 51.011292 MiB name: evtpool_3762260 00:05:12.707 size: 50.003479 MiB name: msgpool_3762260 00:05:12.707 size: 21.763794 MiB name: PDU_Pool 00:05:12.707 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:12.707 size: 0.026123 MiB name: Session_Pool 00:05:12.707 end mempools------- 00:05:12.707 6 memzones totaling size 4.142822 MiB 00:05:12.707 size: 1.000366 MiB name: RG_ring_0_3762260 00:05:12.707 size: 1.000366 MiB name: RG_ring_1_3762260 00:05:12.707 size: 1.000366 MiB name: RG_ring_4_3762260 00:05:12.707 size: 1.000366 MiB name: RG_ring_5_3762260 00:05:12.707 size: 0.125366 MiB name: RG_ring_2_3762260 00:05:12.707 size: 0.015991 MiB name: RG_ring_3_3762260 00:05:12.707 end memzones------- 00:05:12.707 12:48:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:12.707 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:12.707 list of free elements. 
size: 12.519348 MiB 00:05:12.707 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:12.707 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:12.707 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:12.707 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:12.707 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:12.707 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:12.707 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:12.707 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:12.707 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:12.707 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:12.707 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:12.707 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:12.707 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:12.707 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:12.707 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:12.707 list of standard malloc elements. size: 199.218079 MiB 00:05:12.707 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:12.707 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:12.707 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:12.707 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:12.707 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:12.707 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:12.707 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:12.707 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:12.707 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:12.707 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:12.707 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:12.707 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:12.707 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:12.707 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:12.707 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:12.707 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:12.707 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:12.707 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:12.707 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:12.707 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:12.707 list of memzone associated elements. size: 602.262573 MiB 00:05:12.707 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:12.707 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:12.707 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:12.707 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:12.707 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:12.707 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3762260_0 00:05:12.707 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:12.707 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3762260_0 00:05:12.707 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:12.707 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3762260_0 00:05:12.707 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:12.707 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:12.707 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:12.707 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:12.707 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:12.707 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3762260 00:05:12.707 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:12.707 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3762260 00:05:12.707 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:12.707 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3762260 00:05:12.707 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:12.707 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:12.707 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:12.707 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:12.707 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:12.707 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:12.707 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:12.707 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:12.707 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:12.707 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3762260 00:05:12.707 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:12.707 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3762260 00:05:12.707 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:12.707 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3762260 00:05:12.707 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:12.707 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3762260 00:05:12.707 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:12.707 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3762260 00:05:12.707 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:12.707 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:12.707 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:12.707 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:12.707 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:12.707 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:12.707 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:12.707 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3762260 00:05:12.707 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:12.707 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:12.707 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:12.707 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:12.707 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:12.707 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3762260 00:05:12.707 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:12.707 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:12.707 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:12.708 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3762260 00:05:12.708 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:12.708 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3762260 00:05:12.708 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:12.708 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:12.708 12:48:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:12.708 12:48:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3762260 00:05:12.708 12:48:17 -- common/autotest_common.sh@936 -- # '[' -z 3762260 ']' 00:05:12.708 12:48:17 -- common/autotest_common.sh@940 -- # kill -0 3762260 00:05:12.708 12:48:17 -- common/autotest_common.sh@941 -- # uname 00:05:12.708 12:48:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:12.708 12:48:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3762260 00:05:12.708 12:48:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:12.708 12:48:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:12.708 12:48:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3762260' 00:05:12.708 killing process with pid 3762260 00:05:12.708 12:48:17 -- common/autotest_common.sh@955 -- # kill 3762260 00:05:12.708 12:48:17 -- common/autotest_common.sh@960 -- # wait 3762260 00:05:12.968 00:05:12.968 real 0m1.265s 00:05:12.968 user 0m1.334s 00:05:12.968 sys 0m0.356s 00:05:12.968 12:48:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.969 12:48:17 -- common/autotest_common.sh@10 -- # set +x 00:05:12.969 ************************************ 00:05:12.969 END TEST dpdk_mem_utility 00:05:12.969 ************************************ 00:05:12.969 12:48:18 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:12.969 12:48:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.969 12:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.969 12:48:18 -- common/autotest_common.sh@10 -- # set +x 
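The dpdk_mem_utility run above boils down to the two entry points that produced the heap/mempool/memzone dump: the env_dpdk_get_mem_stats RPC writes /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders it. Stripped of the test harness, the sequence is roughly:

  ./scripts/rpc.py env_dpdk_get_mem_stats     # dumps DPDK memory stats to /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                  # summary view: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0, as printed above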
00:05:13.230 ************************************ 00:05:13.230 START TEST event 00:05:13.230 ************************************ 00:05:13.230 12:48:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:13.230 * Looking for test storage... 00:05:13.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:13.230 12:48:18 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:13.230 12:48:18 -- bdev/nbd_common.sh@6 -- # set -e 00:05:13.230 12:48:18 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.230 12:48:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:13.230 12:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.230 12:48:18 -- common/autotest_common.sh@10 -- # set +x 00:05:13.492 ************************************ 00:05:13.492 START TEST event_perf 00:05:13.492 ************************************ 00:05:13.492 12:48:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.492 Running I/O for 1 seconds...[2024-04-26 12:48:18.431116] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:13.492 [2024-04-26 12:48:18.431215] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762615 ] 00:05:13.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.492 [2024-04-26 12:48:18.501598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.753 [2024-04-26 12:48:18.579134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.753 [2024-04-26 12:48:18.579265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.753 [2024-04-26 12:48:18.579421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.753 Running I/O for 1 seconds...[2024-04-26 12:48:18.579421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.696 00:05:14.696 lcore 0: 166231 00:05:14.696 lcore 1: 166232 00:05:14.696 lcore 2: 166228 00:05:14.696 lcore 3: 166231 00:05:14.696 done. 
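event_perf drives events across every reactor in the core mask for a fixed time and prints a per-lcore counter, so a healthy run shows roughly equal totals on each core (here about 166k on each of the four lcores over one second). The invocation used by the test, runnable on its own from the repo root:

  ./test/event/event_perf/event_perf -m 0xF -t 1    # -m: reactor core mask (4 cores), -t: run time in seconds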
00:05:14.696 00:05:14.696 real 0m1.224s 00:05:14.696 user 0m4.136s 00:05:14.696 sys 0m0.086s 00:05:14.696 12:48:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.696 12:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:14.696 ************************************ 00:05:14.696 END TEST event_perf 00:05:14.696 ************************************ 00:05:14.696 12:48:19 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:14.696 12:48:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:14.696 12:48:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.696 12:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:14.956 ************************************ 00:05:14.956 START TEST event_reactor 00:05:14.956 ************************************ 00:05:14.956 12:48:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:14.956 [2024-04-26 12:48:19.849134] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:14.956 [2024-04-26 12:48:19.849230] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762842 ] 00:05:14.956 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.956 [2024-04-26 12:48:19.916817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.956 [2024-04-26 12:48:19.988722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.340 test_start 00:05:16.340 oneshot 00:05:16.340 tick 100 00:05:16.340 tick 100 00:05:16.340 tick 250 00:05:16.340 tick 100 00:05:16.340 tick 100 00:05:16.340 tick 250 00:05:16.340 tick 100 00:05:16.340 tick 500 00:05:16.340 tick 100 00:05:16.340 tick 100 00:05:16.340 tick 250 00:05:16.340 tick 100 00:05:16.340 tick 100 00:05:16.340 test_end 00:05:16.340 00:05:16.340 real 0m1.214s 00:05:16.340 user 0m1.139s 00:05:16.340 sys 0m0.069s 00:05:16.340 12:48:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.340 12:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:16.340 ************************************ 00:05:16.340 END TEST event_reactor 00:05:16.340 ************************************ 00:05:16.340 12:48:21 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.340 12:48:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:16.340 12:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.340 12:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:16.340 ************************************ 00:05:16.340 START TEST event_reactor_perf 00:05:16.340 ************************************ 00:05:16.340 12:48:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.340 [2024-04-26 12:48:21.261288] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:16.340 [2024-04-26 12:48:21.261386] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763087 ] 00:05:16.340 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.340 [2024-04-26 12:48:21.330373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.601 [2024-04-26 12:48:21.404560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.542 test_start 00:05:17.542 test_end 00:05:17.542 Performance: 361793 events per second 00:05:17.542 00:05:17.542 real 0m1.217s 00:05:17.542 user 0m1.134s 00:05:17.542 sys 0m0.079s 00:05:17.542 12:48:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.542 12:48:22 -- common/autotest_common.sh@10 -- # set +x 00:05:17.542 ************************************ 00:05:17.542 END TEST event_reactor_perf 00:05:17.542 ************************************ 00:05:17.542 12:48:22 -- event/event.sh@49 -- # uname -s 00:05:17.542 12:48:22 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:17.542 12:48:22 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:17.542 12:48:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.542 12:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.542 12:48:22 -- common/autotest_common.sh@10 -- # set +x 00:05:17.803 ************************************ 00:05:17.803 START TEST event_scheduler 00:05:17.803 ************************************ 00:05:17.803 12:48:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:17.803 * Looking for test storage... 00:05:17.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:17.803 12:48:22 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:17.803 12:48:22 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3763464 00:05:17.803 12:48:22 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.803 12:48:22 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:17.803 12:48:22 -- scheduler/scheduler.sh@37 -- # waitforlisten 3763464 00:05:17.803 12:48:22 -- common/autotest_common.sh@817 -- # '[' -z 3763464 ']' 00:05:17.803 12:48:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.803 12:48:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.803 12:48:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.803 12:48:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.803 12:48:22 -- common/autotest_common.sh@10 -- # set +x 00:05:17.803 [2024-04-26 12:48:22.812829] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:17.803 [2024-04-26 12:48:22.812885] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763464 ] 00:05:17.803 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.104 [2024-04-26 12:48:22.866109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.104 [2024-04-26 12:48:22.918513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.104 [2024-04-26 12:48:22.918694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.104 [2024-04-26 12:48:22.918808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.104 [2024-04-26 12:48:22.918809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.698 12:48:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.698 12:48:23 -- common/autotest_common.sh@850 -- # return 0 00:05:18.698 12:48:23 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:18.698 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.698 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.698 POWER: Env isn't set yet! 00:05:18.698 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:18.698 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.698 POWER: Cannot set governor of lcore 0 to userspace 00:05:18.698 POWER: Attempting to initialise PSTAT power management... 00:05:18.698 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:18.698 POWER: Initialized successfully for lcore 0 power management 00:05:18.698 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:18.698 POWER: Initialized successfully for lcore 1 power management 00:05:18.698 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:18.698 POWER: Initialized successfully for lcore 2 power management 00:05:18.698 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:18.698 POWER: Initialized successfully for lcore 3 power management 00:05:18.698 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.698 12:48:23 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:18.698 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.698 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.698 [2024-04-26 12:48:23.687305] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
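Because the scheduler app is launched with --wait-for-rpc, the framework stays paused until the test selects a scheduler and kicks off initialization over RPC; the POWER messages above show the CPU governors being switched to 'performance' as part of that step. The two calls issued here through the bash rpc_cmd wrapper correspond to:

  ./scripts/rpc.py framework_set_scheduler dynamic   # choose the dynamic scheduler before init
  ./scripts/rpc.py framework_start_init              # let the paused framework finish starting up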
00:05:18.698 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.698 12:48:23 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:18.698 12:48:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.698 12:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.698 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 ************************************ 00:05:18.960 START TEST scheduler_create_thread 00:05:18.960 ************************************ 00:05:18.960 12:48:23 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 2 00:05:18.960 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 3 00:05:18.960 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 4 00:05:18.960 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 5 00:05:18.960 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 6 00:05:18.960 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 7 00:05:18.960 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 8 00:05:18.960 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:18.960 9 00:05:18.960 
12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.960 12:48:23 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:18.960 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.960 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:20.347 10 00:05:20.347 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.347 12:48:25 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.347 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.347 12:48:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.734 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:21.734 12:48:26 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:21.734 12:48:26 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:21.734 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:21.734 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:05:22.305 12:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.305 12:48:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.305 12:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.305 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:05:23.248 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.248 12:48:28 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:23.249 12:48:28 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:23.249 12:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.249 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:05:24.192 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.192 00:05:24.192 real 0m5.099s 00:05:24.192 user 0m0.025s 00:05:24.192 sys 0m0.005s 00:05:24.192 12:48:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.192 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:05:24.192 ************************************ 00:05:24.192 END TEST scheduler_create_thread 00:05:24.192 ************************************ 00:05:24.192 12:48:28 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.192 12:48:28 -- scheduler/scheduler.sh@46 -- # killprocess 3763464 00:05:24.192 12:48:28 -- common/autotest_common.sh@936 -- # '[' -z 3763464 ']' 00:05:24.192 12:48:28 -- common/autotest_common.sh@940 -- # kill -0 3763464 00:05:24.192 12:48:28 -- common/autotest_common.sh@941 -- # uname 00:05:24.192 12:48:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.192 12:48:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3763464 00:05:24.192 12:48:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:24.192 12:48:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:24.192 12:48:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3763464' 00:05:24.192 killing process with pid 3763464 00:05:24.192 12:48:28 -- common/autotest_common.sh@955 -- # kill 3763464 00:05:24.192 12:48:28 -- common/autotest_common.sh@960 -- # wait 3763464 00:05:24.452 [2024-04-26 12:48:29.282657] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
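scheduler_create_thread drives everything through rpc.py with the test's scheduler_plugin loaded, creating pinned active and idle threads, retargeting one and deleting another before shutdown (the plugin lives alongside the scheduler test app; exactly how the harness makes it importable is not shown here). A condensed sketch of those calls, with the thread IDs 11 and 12 taken from the run above:

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50    # drop thread 11 to 50% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12           # remove thread 12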
00:05:24.452 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:24.452 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:24.452 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:24.452 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:24.452 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:24.452 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:24.452 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:24.452 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:24.452 00:05:24.452 real 0m6.825s 00:05:24.452 user 0m13.350s 00:05:24.452 sys 0m0.371s 00:05:24.452 12:48:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.452 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.452 ************************************ 00:05:24.452 END TEST event_scheduler 00:05:24.452 ************************************ 00:05:24.713 12:48:29 -- event/event.sh@51 -- # modprobe -n nbd 00:05:24.713 12:48:29 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:24.713 12:48:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.713 12:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.713 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.713 ************************************ 00:05:24.713 START TEST app_repeat 00:05:24.713 ************************************ 00:05:24.713 12:48:29 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:24.713 12:48:29 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.713 12:48:29 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.713 12:48:29 -- event/event.sh@13 -- # local nbd_list 00:05:24.713 12:48:29 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.713 12:48:29 -- event/event.sh@14 -- # local bdev_list 00:05:24.713 12:48:29 -- event/event.sh@15 -- # local repeat_times=4 00:05:24.713 12:48:29 -- event/event.sh@17 -- # modprobe nbd 00:05:24.713 12:48:29 -- event/event.sh@19 -- # repeat_pid=3764877 00:05:24.713 12:48:29 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.713 12:48:29 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:24.713 12:48:29 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3764877' 00:05:24.713 Process app_repeat pid: 3764877 00:05:24.713 12:48:29 -- event/event.sh@23 -- # for i in {0..2} 00:05:24.713 12:48:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:24.713 spdk_app_start Round 0 00:05:24.713 12:48:29 -- event/event.sh@25 -- # waitforlisten 3764877 /var/tmp/spdk-nbd.sock 00:05:24.713 12:48:29 -- common/autotest_common.sh@817 -- # '[' -z 3764877 ']' 00:05:24.713 12:48:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.713 12:48:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.713 12:48:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:24.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.713 12:48:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.713 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.713 [2024-04-26 12:48:29.710039] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:24.713 [2024-04-26 12:48:29.710106] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764877 ] 00:05:24.713 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.713 [2024-04-26 12:48:29.773044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.974 [2024-04-26 12:48:29.837168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.974 [2024-04-26 12:48:29.837272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.548 12:48:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.548 12:48:30 -- common/autotest_common.sh@850 -- # return 0 00:05:25.548 12:48:30 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.809 Malloc0 00:05:25.809 12:48:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.809 Malloc1 00:05:25.809 12:48:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@12 -- # local i 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.809 12:48:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.070 /dev/nbd0 00:05:26.070 12:48:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.070 12:48:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.070 12:48:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:26.070 12:48:30 -- common/autotest_common.sh@855 -- # local i 00:05:26.070 12:48:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:26.070 12:48:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:26.070 12:48:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:26.070 12:48:30 -- 
common/autotest_common.sh@859 -- # break 00:05:26.070 12:48:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:26.070 12:48:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:26.070 12:48:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.070 1+0 records in 00:05:26.070 1+0 records out 00:05:26.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241841 s, 16.9 MB/s 00:05:26.070 12:48:30 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.070 12:48:30 -- common/autotest_common.sh@872 -- # size=4096 00:05:26.070 12:48:30 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.070 12:48:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:26.070 12:48:30 -- common/autotest_common.sh@875 -- # return 0 00:05:26.070 12:48:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.070 12:48:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.070 12:48:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.331 /dev/nbd1 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.331 12:48:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:26.331 12:48:31 -- common/autotest_common.sh@855 -- # local i 00:05:26.331 12:48:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:26.331 12:48:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:26.331 12:48:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:26.331 12:48:31 -- common/autotest_common.sh@859 -- # break 00:05:26.331 12:48:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:26.331 12:48:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:26.331 12:48:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.331 1+0 records in 00:05:26.331 1+0 records out 00:05:26.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226938 s, 18.0 MB/s 00:05:26.331 12:48:31 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.331 12:48:31 -- common/autotest_common.sh@872 -- # size=4096 00:05:26.331 12:48:31 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.331 12:48:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:26.331 12:48:31 -- common/autotest_common.sh@875 -- # return 0 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.331 { 00:05:26.331 "nbd_device": "/dev/nbd0", 00:05:26.331 "bdev_name": "Malloc0" 00:05:26.331 }, 00:05:26.331 { 00:05:26.331 "nbd_device": "/dev/nbd1", 
00:05:26.331 "bdev_name": "Malloc1" 00:05:26.331 } 00:05:26.331 ]' 00:05:26.331 12:48:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.331 { 00:05:26.331 "nbd_device": "/dev/nbd0", 00:05:26.331 "bdev_name": "Malloc0" 00:05:26.331 }, 00:05:26.331 { 00:05:26.331 "nbd_device": "/dev/nbd1", 00:05:26.331 "bdev_name": "Malloc1" 00:05:26.331 } 00:05:26.332 ]' 00:05:26.332 12:48:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.332 12:48:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.332 /dev/nbd1' 00:05:26.332 12:48:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.332 /dev/nbd1' 00:05:26.332 12:48:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.592 256+0 records in 00:05:26.592 256+0 records out 00:05:26.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124284 s, 84.4 MB/s 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.592 256+0 records in 00:05:26.592 256+0 records out 00:05:26.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160734 s, 65.2 MB/s 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.592 256+0 records in 00:05:26.592 256+0 records out 00:05:26.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174008 s, 60.3 MB/s 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.592 12:48:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@51 -- # local i 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@41 -- # break 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.593 12:48:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@41 -- # break 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.853 12:48:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.113 12:48:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.113 12:48:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.113 12:48:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.113 12:48:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@65 -- # true 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.113 12:48:32 -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.113 12:48:32 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.113 12:48:32 -- event/event.sh@35 -- # 
sleep 3 00:05:27.373 [2024-04-26 12:48:32.303672] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.373 [2024-04-26 12:48:32.364175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.373 [2024-04-26 12:48:32.364175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.373 [2024-04-26 12:48:32.396023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.373 [2024-04-26 12:48:32.396059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.669 12:48:35 -- event/event.sh@23 -- # for i in {0..2} 00:05:30.669 12:48:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.669 spdk_app_start Round 1 00:05:30.669 12:48:35 -- event/event.sh@25 -- # waitforlisten 3764877 /var/tmp/spdk-nbd.sock 00:05:30.669 12:48:35 -- common/autotest_common.sh@817 -- # '[' -z 3764877 ']' 00:05:30.669 12:48:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.669 12:48:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.669 12:48:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.669 12:48:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.669 12:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:30.669 12:48:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.669 12:48:35 -- common/autotest_common.sh@850 -- # return 0 00:05:30.669 12:48:35 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.669 Malloc0 00:05:30.669 12:48:35 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.669 Malloc1 00:05:30.669 12:48:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@12 -- # local i 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.669 12:48:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.930 /dev/nbd0 00:05:30.930 12:48:35 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.930 12:48:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.930 12:48:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:30.930 12:48:35 -- common/autotest_common.sh@855 -- # local i 00:05:30.930 12:48:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:30.930 12:48:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:30.930 12:48:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:30.930 12:48:35 -- common/autotest_common.sh@859 -- # break 00:05:30.930 12:48:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:30.930 12:48:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:30.930 12:48:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.930 1+0 records in 00:05:30.930 1+0 records out 00:05:30.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241612 s, 17.0 MB/s 00:05:30.930 12:48:35 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.930 12:48:35 -- common/autotest_common.sh@872 -- # size=4096 00:05:30.930 12:48:35 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.930 12:48:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:30.930 12:48:35 -- common/autotest_common.sh@875 -- # return 0 00:05:30.930 12:48:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.930 12:48:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.930 12:48:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.930 /dev/nbd1 00:05:31.192 12:48:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.192 12:48:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.192 12:48:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:31.192 12:48:35 -- common/autotest_common.sh@855 -- # local i 00:05:31.192 12:48:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:31.192 12:48:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:31.192 12:48:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:31.192 12:48:35 -- common/autotest_common.sh@859 -- # break 00:05:31.192 12:48:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:31.192 12:48:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:31.192 12:48:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.192 1+0 records in 00:05:31.192 1+0 records out 00:05:31.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252723 s, 16.2 MB/s 00:05:31.193 12:48:36 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.193 12:48:36 -- common/autotest_common.sh@872 -- # size=4096 00:05:31.193 12:48:36 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.193 12:48:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:31.193 12:48:36 -- common/autotest_common.sh@875 -- # return 0 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.193 { 00:05:31.193 "nbd_device": "/dev/nbd0", 00:05:31.193 "bdev_name": "Malloc0" 00:05:31.193 }, 00:05:31.193 { 00:05:31.193 "nbd_device": "/dev/nbd1", 00:05:31.193 "bdev_name": "Malloc1" 00:05:31.193 } 00:05:31.193 ]' 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.193 { 00:05:31.193 "nbd_device": "/dev/nbd0", 00:05:31.193 "bdev_name": "Malloc0" 00:05:31.193 }, 00:05:31.193 { 00:05:31.193 "nbd_device": "/dev/nbd1", 00:05:31.193 "bdev_name": "Malloc1" 00:05:31.193 } 00:05:31.193 ]' 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.193 /dev/nbd1' 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.193 /dev/nbd1' 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.193 256+0 records in 00:05:31.193 256+0 records out 00:05:31.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118509 s, 88.5 MB/s 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.193 12:48:36 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.453 256+0 records in 00:05:31.453 256+0 records out 00:05:31.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160864 s, 65.2 MB/s 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.453 256+0 records in 00:05:31.453 256+0 records out 00:05:31.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173674 s, 60.4 MB/s 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@51 -- # local i 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.453 12:48:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@41 -- # break 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.454 12:48:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@41 -- # break 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.715 12:48:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@65 -- # true 00:05:31.975 12:48:36 -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.976 12:48:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.976 12:48:36 -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.976 12:48:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.976 12:48:36 -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.976 12:48:36 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.976 12:48:37 -- event/event.sh@35 -- # sleep 3 00:05:32.236 [2024-04-26 12:48:37.166038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.236 [2024-04-26 12:48:37.226239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.236 [2024-04-26 12:48:37.226240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.236 [2024-04-26 12:48:37.258956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.236 [2024-04-26 12:48:37.258993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.541 12:48:40 -- event/event.sh@23 -- # for i in {0..2} 00:05:35.541 12:48:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.541 spdk_app_start Round 2 00:05:35.541 12:48:40 -- event/event.sh@25 -- # waitforlisten 3764877 /var/tmp/spdk-nbd.sock 00:05:35.541 12:48:40 -- common/autotest_common.sh@817 -- # '[' -z 3764877 ']' 00:05:35.541 12:48:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.541 12:48:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.541 12:48:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.541 12:48:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.541 12:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:35.541 12:48:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.541 12:48:40 -- common/autotest_common.sh@850 -- # return 0 00:05:35.541 12:48:40 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.541 Malloc0 00:05:35.541 12:48:40 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.541 Malloc1 00:05:35.541 12:48:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@12 -- # local i 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.541 12:48:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.803 /dev/nbd0 00:05:35.803 12:48:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.803 12:48:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.803 12:48:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:35.803 12:48:40 -- common/autotest_common.sh@855 -- # local i 00:05:35.803 12:48:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:35.803 12:48:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:35.803 12:48:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:35.803 12:48:40 -- common/autotest_common.sh@859 -- # break 00:05:35.803 12:48:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:35.803 12:48:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:35.803 12:48:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.803 1+0 records in 00:05:35.803 1+0 records out 00:05:35.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020286 s, 20.2 MB/s 00:05:35.803 12:48:40 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.803 12:48:40 -- common/autotest_common.sh@872 -- # size=4096 00:05:35.803 12:48:40 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.803 12:48:40 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:35.803 12:48:40 -- common/autotest_common.sh@875 -- # return 0 00:05:35.803 12:48:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.803 12:48:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.803 12:48:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.803 /dev/nbd1 00:05:35.803 12:48:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.803 12:48:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.803 12:48:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:35.804 12:48:40 -- common/autotest_common.sh@855 -- # local i 00:05:35.804 12:48:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:35.804 12:48:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:35.804 12:48:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:35.804 12:48:40 -- common/autotest_common.sh@859 -- # break 00:05:35.804 12:48:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:35.804 12:48:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:35.804 12:48:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.804 1+0 records in 00:05:35.804 1+0 records out 00:05:35.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273559 s, 15.0 MB/s 00:05:35.804 12:48:40 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.804 12:48:40 -- common/autotest_common.sh@872 -- # size=4096 00:05:35.804 12:48:40 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.804 12:48:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:35.804 12:48:40 -- common/autotest_common.sh@875 -- # return 0 00:05:35.804 12:48:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.804 12:48:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.804 12:48:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.068 12:48:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.068 12:48:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.068 { 00:05:36.068 "nbd_device": "/dev/nbd0", 00:05:36.068 "bdev_name": "Malloc0" 00:05:36.068 }, 00:05:36.068 { 00:05:36.068 "nbd_device": "/dev/nbd1", 00:05:36.068 "bdev_name": "Malloc1" 00:05:36.068 } 00:05:36.068 ]' 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.068 { 00:05:36.068 "nbd_device": "/dev/nbd0", 00:05:36.068 "bdev_name": "Malloc0" 00:05:36.068 }, 00:05:36.068 { 00:05:36.068 "nbd_device": "/dev/nbd1", 00:05:36.068 "bdev_name": "Malloc1" 00:05:36.068 } 00:05:36.068 ]' 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.068 /dev/nbd1' 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.068 /dev/nbd1' 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.068 12:48:41 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.068 256+0 records in 00:05:36.068 256+0 records out 00:05:36.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00291344 s, 360 MB/s 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.068 256+0 records in 00:05:36.068 256+0 records out 00:05:36.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168697 s, 62.2 MB/s 00:05:36.068 12:48:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.069 256+0 records in 00:05:36.069 256+0 records out 00:05:36.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01703 s, 61.6 MB/s 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.069 12:48:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@51 -- # local i 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.330 12:48:41 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@41 -- # break 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.330 12:48:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@41 -- # break 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.592 12:48:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@65 -- # true 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.854 12:48:41 -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.854 12:48:41 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.854 12:48:41 -- event/event.sh@35 -- # sleep 3 00:05:37.116 [2024-04-26 12:48:42.007070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.116 [2024-04-26 12:48:42.068150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.116 [2024-04-26 12:48:42.068150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.116 [2024-04-26 12:48:42.100090] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.116 [2024-04-26 12:48:42.100126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
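For reference, the create/write/verify/teardown cycle that each app_repeat round above traces out reduces to the following sequence. This is a minimal sketch reconstructed from the commands visible in the trace (paths, sizes and bdev names are the ones this run used); it is illustrative, not the test script itself.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC bdev_malloc_create 64 4096             # creates Malloc0
  $RPC bdev_malloc_create 64 4096             # creates Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0       # export each bdev over NBD
  $RPC nbd_start_disk Malloc1 /dev/nbd1

  tmp=$SPDK/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write it through the NBD device
      cmp -b -n 1M $tmp $nbd                             # read back and compare byte for byte
  done
  rm $tmp

  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM             # end the round; the harness sleeps 3s and starts the next one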
00:05:40.421 12:48:44 -- event/event.sh@38 -- # waitforlisten 3764877 /var/tmp/spdk-nbd.sock 00:05:40.421 12:48:44 -- common/autotest_common.sh@817 -- # '[' -z 3764877 ']' 00:05:40.421 12:48:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.421 12:48:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.421 12:48:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.421 12:48:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.421 12:48:44 -- common/autotest_common.sh@10 -- # set +x 00:05:40.421 12:48:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.421 12:48:45 -- common/autotest_common.sh@850 -- # return 0 00:05:40.421 12:48:45 -- event/event.sh@39 -- # killprocess 3764877 00:05:40.421 12:48:45 -- common/autotest_common.sh@936 -- # '[' -z 3764877 ']' 00:05:40.421 12:48:45 -- common/autotest_common.sh@940 -- # kill -0 3764877 00:05:40.421 12:48:45 -- common/autotest_common.sh@941 -- # uname 00:05:40.421 12:48:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.421 12:48:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3764877 00:05:40.421 12:48:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.421 12:48:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.421 12:48:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3764877' 00:05:40.421 killing process with pid 3764877 00:05:40.421 12:48:45 -- common/autotest_common.sh@955 -- # kill 3764877 00:05:40.421 12:48:45 -- common/autotest_common.sh@960 -- # wait 3764877 00:05:40.421 spdk_app_start is called in Round 0. 00:05:40.421 Shutdown signal received, stop current app iteration 00:05:40.421 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:40.421 spdk_app_start is called in Round 1. 00:05:40.421 Shutdown signal received, stop current app iteration 00:05:40.421 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:40.421 spdk_app_start is called in Round 2. 00:05:40.421 Shutdown signal received, stop current app iteration 00:05:40.421 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:40.421 spdk_app_start is called in Round 3. 
00:05:40.421 Shutdown signal received, stop current app iteration 00:05:40.421 12:48:45 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.421 12:48:45 -- event/event.sh@42 -- # return 0 00:05:40.421 00:05:40.421 real 0m15.524s 00:05:40.421 user 0m33.474s 00:05:40.421 sys 0m2.072s 00:05:40.421 12:48:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.421 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.421 ************************************ 00:05:40.421 END TEST app_repeat 00:05:40.421 ************************************ 00:05:40.421 12:48:45 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:40.421 12:48:45 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:40.421 12:48:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.421 12:48:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.421 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.421 ************************************ 00:05:40.421 START TEST cpu_locks 00:05:40.421 ************************************ 00:05:40.421 12:48:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:40.682 * Looking for test storage... 00:05:40.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:40.682 12:48:45 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:40.682 12:48:45 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:40.682 12:48:45 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:40.682 12:48:45 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:40.682 12:48:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.682 12:48:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.682 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.682 ************************************ 00:05:40.682 START TEST default_locks 00:05:40.682 ************************************ 00:05:40.682 12:48:45 -- common/autotest_common.sh@1111 -- # default_locks 00:05:40.682 12:48:45 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3768466 00:05:40.682 12:48:45 -- event/cpu_locks.sh@47 -- # waitforlisten 3768466 00:05:40.682 12:48:45 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.682 12:48:45 -- common/autotest_common.sh@817 -- # '[' -z 3768466 ']' 00:05:40.682 12:48:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.682 12:48:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.682 12:48:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.682 12:48:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.682 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.682 [2024-04-26 12:48:45.698958] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:40.682 [2024-04-26 12:48:45.699016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768466 ] 00:05:40.682 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.943 [2024-04-26 12:48:45.763584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.943 [2024-04-26 12:48:45.835952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.512 12:48:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.512 12:48:46 -- common/autotest_common.sh@850 -- # return 0 00:05:41.512 12:48:46 -- event/cpu_locks.sh@49 -- # locks_exist 3768466 00:05:41.512 12:48:46 -- event/cpu_locks.sh@22 -- # lslocks -p 3768466 00:05:41.512 12:48:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.773 lslocks: write error 00:05:41.773 12:48:46 -- event/cpu_locks.sh@50 -- # killprocess 3768466 00:05:41.773 12:48:46 -- common/autotest_common.sh@936 -- # '[' -z 3768466 ']' 00:05:41.773 12:48:46 -- common/autotest_common.sh@940 -- # kill -0 3768466 00:05:41.773 12:48:46 -- common/autotest_common.sh@941 -- # uname 00:05:41.773 12:48:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.773 12:48:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3768466 00:05:41.773 12:48:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.773 12:48:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.773 12:48:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3768466' 00:05:41.773 killing process with pid 3768466 00:05:41.773 12:48:46 -- common/autotest_common.sh@955 -- # kill 3768466 00:05:41.773 12:48:46 -- common/autotest_common.sh@960 -- # wait 3768466 00:05:42.034 12:48:46 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3768466 00:05:42.034 12:48:46 -- common/autotest_common.sh@638 -- # local es=0 00:05:42.034 12:48:46 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3768466 00:05:42.034 12:48:46 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:42.034 12:48:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:42.034 12:48:46 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:42.034 12:48:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:42.034 12:48:46 -- common/autotest_common.sh@641 -- # waitforlisten 3768466 00:05:42.034 12:48:46 -- common/autotest_common.sh@817 -- # '[' -z 3768466 ']' 00:05:42.034 12:48:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.034 12:48:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.034 12:48:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.034 12:48:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.034 12:48:46 -- common/autotest_common.sh@10 -- # set +x 00:05:42.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3768466) - No such process 00:05:42.034 ERROR: process (pid: 3768466) is no longer running 00:05:42.034 12:48:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.034 12:48:46 -- common/autotest_common.sh@850 -- # return 1 00:05:42.034 12:48:46 -- common/autotest_common.sh@641 -- # es=1 00:05:42.034 12:48:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:42.034 12:48:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:42.034 12:48:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:42.034 12:48:46 -- event/cpu_locks.sh@54 -- # no_locks 00:05:42.034 12:48:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.034 12:48:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.034 12:48:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.034 00:05:42.034 real 0m1.213s 00:05:42.034 user 0m1.277s 00:05:42.034 sys 0m0.394s 00:05:42.034 12:48:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.034 12:48:46 -- common/autotest_common.sh@10 -- # set +x 00:05:42.034 ************************************ 00:05:42.034 END TEST default_locks 00:05:42.034 ************************************ 00:05:42.034 12:48:46 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:42.034 12:48:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.034 12:48:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.034 12:48:46 -- common/autotest_common.sh@10 -- # set +x 00:05:42.034 ************************************ 00:05:42.034 START TEST default_locks_via_rpc 00:05:42.034 ************************************ 00:05:42.034 12:48:47 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:42.034 12:48:47 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3768721 00:05:42.034 12:48:47 -- event/cpu_locks.sh@63 -- # waitforlisten 3768721 00:05:42.034 12:48:47 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.034 12:48:47 -- common/autotest_common.sh@817 -- # '[' -z 3768721 ']' 00:05:42.034 12:48:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.034 12:48:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.034 12:48:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.034 12:48:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.034 12:48:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.295 [2024-04-26 12:48:47.096716] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
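The cpu_locks cases in this trace all rely on the same probe for the scheduler's CPU lock file; the 'lslocks: write error' lines above are a side effect of grep -q closing the pipe as soon as it matches, not a test failure. A minimal sketch of the probe, reconstructed from the commands in the trace (the exact lock-file path is not shown there and is deliberately left out here):

  locks_exist() {
      local pid=$1
      # spdk_tgt started with -m 0x1 holds a file lock whose name contains
      # "spdk_cpu_lock"; lslocks lists the locks held by that pid and grep -q
      # succeeds on the first match.
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }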
00:05:42.295 [2024-04-26 12:48:47.096763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768721 ] 00:05:42.295 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.295 [2024-04-26 12:48:47.157447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.295 [2024-04-26 12:48:47.224628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.865 12:48:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.865 12:48:47 -- common/autotest_common.sh@850 -- # return 0 00:05:42.865 12:48:47 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:42.865 12:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.865 12:48:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.865 12:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.865 12:48:47 -- event/cpu_locks.sh@67 -- # no_locks 00:05:42.865 12:48:47 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.865 12:48:47 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.865 12:48:47 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.865 12:48:47 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.865 12:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.865 12:48:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.865 12:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.865 12:48:47 -- event/cpu_locks.sh@71 -- # locks_exist 3768721 00:05:42.865 12:48:47 -- event/cpu_locks.sh@22 -- # lslocks -p 3768721 00:05:42.865 12:48:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.433 12:48:48 -- event/cpu_locks.sh@73 -- # killprocess 3768721 00:05:43.434 12:48:48 -- common/autotest_common.sh@936 -- # '[' -z 3768721 ']' 00:05:43.434 12:48:48 -- common/autotest_common.sh@940 -- # kill -0 3768721 00:05:43.434 12:48:48 -- common/autotest_common.sh@941 -- # uname 00:05:43.434 12:48:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.434 12:48:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3768721 00:05:43.434 12:48:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.434 12:48:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.434 12:48:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3768721' 00:05:43.434 killing process with pid 3768721 00:05:43.434 12:48:48 -- common/autotest_common.sh@955 -- # kill 3768721 00:05:43.434 12:48:48 -- common/autotest_common.sh@960 -- # wait 3768721 00:05:43.693 00:05:43.693 real 0m1.553s 00:05:43.693 user 0m1.650s 00:05:43.693 sys 0m0.527s 00:05:43.693 12:48:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.693 12:48:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.693 ************************************ 00:05:43.693 END TEST default_locks_via_rpc 00:05:43.693 ************************************ 00:05:43.693 12:48:48 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:43.693 12:48:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.693 12:48:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.693 12:48:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.953 ************************************ 00:05:43.953 START TEST non_locking_app_on_locked_coremask 
00:05:43.953 ************************************ 00:05:43.953 12:48:48 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:43.953 12:48:48 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3769124 00:05:43.953 12:48:48 -- event/cpu_locks.sh@81 -- # waitforlisten 3769124 /var/tmp/spdk.sock 00:05:43.953 12:48:48 -- common/autotest_common.sh@817 -- # '[' -z 3769124 ']' 00:05:43.953 12:48:48 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.953 12:48:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.953 12:48:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.953 12:48:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.953 12:48:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.953 12:48:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.953 [2024-04-26 12:48:48.848455] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:43.953 [2024-04-26 12:48:48.848515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769124 ] 00:05:43.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.953 [2024-04-26 12:48:48.913320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.953 [2024-04-26 12:48:48.986563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.942 12:48:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.942 12:48:49 -- common/autotest_common.sh@850 -- # return 0 00:05:44.942 12:48:49 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3769222 00:05:44.942 12:48:49 -- event/cpu_locks.sh@85 -- # waitforlisten 3769222 /var/tmp/spdk2.sock 00:05:44.942 12:48:49 -- common/autotest_common.sh@817 -- # '[' -z 3769222 ']' 00:05:44.943 12:48:49 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:44.943 12:48:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.943 12:48:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.943 12:48:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.943 12:48:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.943 12:48:49 -- common/autotest_common.sh@10 -- # set +x 00:05:44.943 [2024-04-26 12:48:49.669058] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:44.943 [2024-04-26 12:48:49.669124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769222 ] 00:05:44.943 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.943 [2024-04-26 12:48:49.759476] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
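At this point non_locking_app_on_locked_coremask has one spdk_tgt holding the core-0 lock and has just brought up a second target with the lock check disabled, which is why the second instance can start on the same core mask. The two-instance pattern reduces to roughly the following (binary path, core mask and RPC sockets as in the trace; a sketch, not the test script):

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  $BIN -m 0x1 &                                                  # first target: acquires the core-0 lock file
  # a second target on the same mask would normally refuse to start while the
  # lock is held; with --disable-cpumask-locks it skips lock acquisition and
  # serves RPCs on its own socket instead
  $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &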
00:05:44.943 [2024-04-26 12:48:49.759512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.943 [2024-04-26 12:48:49.887206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.532 12:48:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.532 12:48:50 -- common/autotest_common.sh@850 -- # return 0 00:05:45.532 12:48:50 -- event/cpu_locks.sh@87 -- # locks_exist 3769124 00:05:45.532 12:48:50 -- event/cpu_locks.sh@22 -- # lslocks -p 3769124 00:05:45.532 12:48:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.101 lslocks: write error 00:05:46.101 12:48:50 -- event/cpu_locks.sh@89 -- # killprocess 3769124 00:05:46.101 12:48:50 -- common/autotest_common.sh@936 -- # '[' -z 3769124 ']' 00:05:46.101 12:48:50 -- common/autotest_common.sh@940 -- # kill -0 3769124 00:05:46.101 12:48:50 -- common/autotest_common.sh@941 -- # uname 00:05:46.101 12:48:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.101 12:48:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3769124 00:05:46.101 12:48:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.101 12:48:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.101 12:48:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3769124' 00:05:46.101 killing process with pid 3769124 00:05:46.101 12:48:50 -- common/autotest_common.sh@955 -- # kill 3769124 00:05:46.101 12:48:50 -- common/autotest_common.sh@960 -- # wait 3769124 00:05:46.362 12:48:51 -- event/cpu_locks.sh@90 -- # killprocess 3769222 00:05:46.362 12:48:51 -- common/autotest_common.sh@936 -- # '[' -z 3769222 ']' 00:05:46.362 12:48:51 -- common/autotest_common.sh@940 -- # kill -0 3769222 00:05:46.362 12:48:51 -- common/autotest_common.sh@941 -- # uname 00:05:46.362 12:48:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.362 12:48:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3769222 00:05:46.362 12:48:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.362 12:48:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.362 12:48:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3769222' 00:05:46.362 killing process with pid 3769222 00:05:46.362 12:48:51 -- common/autotest_common.sh@955 -- # kill 3769222 00:05:46.362 12:48:51 -- common/autotest_common.sh@960 -- # wait 3769222 00:05:46.622 00:05:46.622 real 0m2.837s 00:05:46.622 user 0m3.081s 00:05:46.622 sys 0m0.859s 00:05:46.622 12:48:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.622 12:48:51 -- common/autotest_common.sh@10 -- # set +x 00:05:46.622 ************************************ 00:05:46.622 END TEST non_locking_app_on_locked_coremask 00:05:46.622 ************************************ 00:05:46.622 12:48:51 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:46.622 12:48:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.622 12:48:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.622 12:48:51 -- common/autotest_common.sh@10 -- # set +x 00:05:46.883 ************************************ 00:05:46.883 START TEST locking_app_on_unlocked_coremask 00:05:46.883 ************************************ 00:05:46.883 12:48:51 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:46.883 12:48:51 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3769660 00:05:46.883 12:48:51 -- 
event/cpu_locks.sh@99 -- # waitforlisten 3769660 /var/tmp/spdk.sock 00:05:46.883 12:48:51 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:46.883 12:48:51 -- common/autotest_common.sh@817 -- # '[' -z 3769660 ']' 00:05:46.883 12:48:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.883 12:48:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.883 12:48:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.883 12:48:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.883 12:48:51 -- common/autotest_common.sh@10 -- # set +x 00:05:46.883 [2024-04-26 12:48:51.856963] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:46.884 [2024-04-26 12:48:51.857018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769660 ] 00:05:46.884 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.884 [2024-04-26 12:48:51.919746] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.884 [2024-04-26 12:48:51.919779] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.144 [2024-04-26 12:48:51.990295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.715 12:48:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.715 12:48:52 -- common/autotest_common.sh@850 -- # return 0 00:05:47.715 12:48:52 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3769936 00:05:47.715 12:48:52 -- event/cpu_locks.sh@103 -- # waitforlisten 3769936 /var/tmp/spdk2.sock 00:05:47.715 12:48:52 -- common/autotest_common.sh@817 -- # '[' -z 3769936 ']' 00:05:47.715 12:48:52 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.715 12:48:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.715 12:48:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.715 12:48:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.715 12:48:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.715 12:48:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.715 [2024-04-26 12:48:52.665858] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:47.715 [2024-04-26 12:48:52.665911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769936 ] 00:05:47.715 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.715 [2024-04-26 12:48:52.753416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.976 [2024-04-26 12:48:52.880686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.546 12:48:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.546 12:48:53 -- common/autotest_common.sh@850 -- # return 0 00:05:48.546 12:48:53 -- event/cpu_locks.sh@105 -- # locks_exist 3769936 00:05:48.546 12:48:53 -- event/cpu_locks.sh@22 -- # lslocks -p 3769936 00:05:48.546 12:48:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.806 lslocks: write error 00:05:48.806 12:48:53 -- event/cpu_locks.sh@107 -- # killprocess 3769660 00:05:48.806 12:48:53 -- common/autotest_common.sh@936 -- # '[' -z 3769660 ']' 00:05:48.806 12:48:53 -- common/autotest_common.sh@940 -- # kill -0 3769660 00:05:48.806 12:48:53 -- common/autotest_common.sh@941 -- # uname 00:05:48.806 12:48:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.806 12:48:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3769660 00:05:49.066 12:48:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.066 12:48:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.066 12:48:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3769660' 00:05:49.066 killing process with pid 3769660 00:05:49.066 12:48:53 -- common/autotest_common.sh@955 -- # kill 3769660 00:05:49.066 12:48:53 -- common/autotest_common.sh@960 -- # wait 3769660 00:05:49.326 12:48:54 -- event/cpu_locks.sh@108 -- # killprocess 3769936 00:05:49.326 12:48:54 -- common/autotest_common.sh@936 -- # '[' -z 3769936 ']' 00:05:49.326 12:48:54 -- common/autotest_common.sh@940 -- # kill -0 3769936 00:05:49.326 12:48:54 -- common/autotest_common.sh@941 -- # uname 00:05:49.326 12:48:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.326 12:48:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3769936 00:05:49.326 12:48:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.326 12:48:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.326 12:48:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3769936' 00:05:49.326 killing process with pid 3769936 00:05:49.326 12:48:54 -- common/autotest_common.sh@955 -- # kill 3769936 00:05:49.326 12:48:54 -- common/autotest_common.sh@960 -- # wait 3769936 00:05:49.586 00:05:49.586 real 0m2.747s 00:05:49.587 user 0m3.012s 00:05:49.587 sys 0m0.800s 00:05:49.587 12:48:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.587 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.587 ************************************ 00:05:49.587 END TEST locking_app_on_unlocked_coremask 00:05:49.587 ************************************ 00:05:49.587 12:48:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.587 12:48:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.587 12:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.587 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.847 
************************************ 00:05:49.847 START TEST locking_app_on_locked_coremask 00:05:49.847 ************************************ 00:05:49.847 12:48:54 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:49.847 12:48:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3770323 00:05:49.847 12:48:54 -- event/cpu_locks.sh@116 -- # waitforlisten 3770323 /var/tmp/spdk.sock 00:05:49.847 12:48:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.847 12:48:54 -- common/autotest_common.sh@817 -- # '[' -z 3770323 ']' 00:05:49.847 12:48:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.847 12:48:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.847 12:48:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.847 12:48:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.847 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.847 [2024-04-26 12:48:54.780275] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:49.847 [2024-04-26 12:48:54.780321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770323 ] 00:05:49.847 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.847 [2024-04-26 12:48:54.840584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.847 [2024-04-26 12:48:54.902266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.790 12:48:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.790 12:48:55 -- common/autotest_common.sh@850 -- # return 0 00:05:50.790 12:48:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3770549 00:05:50.790 12:48:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3770549 /var/tmp/spdk2.sock 00:05:50.790 12:48:55 -- common/autotest_common.sh@638 -- # local es=0 00:05:50.790 12:48:55 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.790 12:48:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3770549 /var/tmp/spdk2.sock 00:05:50.790 12:48:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:50.790 12:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.790 12:48:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:50.790 12:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.790 12:48:55 -- common/autotest_common.sh@641 -- # waitforlisten 3770549 /var/tmp/spdk2.sock 00:05:50.790 12:48:55 -- common/autotest_common.sh@817 -- # '[' -z 3770549 ']' 00:05:50.790 12:48:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.790 12:48:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.790 12:48:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
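The NOT wrapper used above (and throughout the accel tests later in this run) asserts that the wrapped command fails: it captures the exit status, normalizes signal-style codes above 128, and returns success only when that status is non-zero. A rough, simplified sketch of the behavior visible in the xtrace (the real helper lives in test/common/autotest_common.sh):

    NOT() {
        local es=0
        "$@" || es=$?            # run the wrapped command, keep its exit status
        (( es > 128 )) && es=1   # simplified: signal-style statuses are collapsed
        (( es != 0 ))            # succeed only if the wrapped command failed
    }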
00:05:50.790 12:48:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.790 12:48:55 -- common/autotest_common.sh@10 -- # set +x 00:05:50.790 [2024-04-26 12:48:55.603054] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:50.790 [2024-04-26 12:48:55.603103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770549 ] 00:05:50.790 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.790 [2024-04-26 12:48:55.691572] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3770323 has claimed it. 00:05:50.790 [2024-04-26 12:48:55.691616] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3770549) - No such process 00:05:51.361 ERROR: process (pid: 3770549) is no longer running 00:05:51.361 12:48:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.361 12:48:56 -- common/autotest_common.sh@850 -- # return 1 00:05:51.361 12:48:56 -- common/autotest_common.sh@641 -- # es=1 00:05:51.361 12:48:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:51.361 12:48:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:51.361 12:48:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:51.361 12:48:56 -- event/cpu_locks.sh@122 -- # locks_exist 3770323 00:05:51.361 12:48:56 -- event/cpu_locks.sh@22 -- # lslocks -p 3770323 00:05:51.361 12:48:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.933 lslocks: write error 00:05:51.933 12:48:56 -- event/cpu_locks.sh@124 -- # killprocess 3770323 00:05:51.933 12:48:56 -- common/autotest_common.sh@936 -- # '[' -z 3770323 ']' 00:05:51.933 12:48:56 -- common/autotest_common.sh@940 -- # kill -0 3770323 00:05:51.933 12:48:56 -- common/autotest_common.sh@941 -- # uname 00:05:51.933 12:48:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.933 12:48:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3770323 00:05:51.933 12:48:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.933 12:48:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.933 12:48:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3770323' 00:05:51.933 killing process with pid 3770323 00:05:51.933 12:48:56 -- common/autotest_common.sh@955 -- # kill 3770323 00:05:51.933 12:48:56 -- common/autotest_common.sh@960 -- # wait 3770323 00:05:51.933 00:05:51.933 real 0m2.229s 00:05:51.933 user 0m2.480s 00:05:51.933 sys 0m0.612s 00:05:51.933 12:48:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.933 12:48:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.933 ************************************ 00:05:51.933 END TEST locking_app_on_locked_coremask 00:05:51.933 ************************************ 00:05:52.200 12:48:56 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.201 12:48:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.201 12:48:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.201 12:48:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.201 ************************************ 00:05:52.201 START TEST locking_overlapped_coremask 00:05:52.201 
************************************ 00:05:52.201 12:48:57 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:52.201 12:48:57 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3770900 00:05:52.201 12:48:57 -- event/cpu_locks.sh@133 -- # waitforlisten 3770900 /var/tmp/spdk.sock 00:05:52.201 12:48:57 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.201 12:48:57 -- common/autotest_common.sh@817 -- # '[' -z 3770900 ']' 00:05:52.201 12:48:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.201 12:48:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.201 12:48:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.201 12:48:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.201 12:48:57 -- common/autotest_common.sh@10 -- # set +x 00:05:52.201 [2024-04-26 12:48:57.190898] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:52.201 [2024-04-26 12:48:57.190941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770900 ] 00:05:52.201 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.201 [2024-04-26 12:48:57.251205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.464 [2024-04-26 12:48:57.315322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.464 [2024-04-26 12:48:57.315454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.464 [2024-04-26 12:48:57.315457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.033 12:48:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.033 12:48:57 -- common/autotest_common.sh@850 -- # return 0 00:05:53.033 12:48:57 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3771042 00:05:53.033 12:48:57 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3771042 /var/tmp/spdk2.sock 00:05:53.033 12:48:57 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:53.033 12:48:57 -- common/autotest_common.sh@638 -- # local es=0 00:05:53.033 12:48:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3771042 /var/tmp/spdk2.sock 00:05:53.033 12:48:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:53.033 12:48:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.033 12:48:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:53.033 12:48:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.033 12:48:57 -- common/autotest_common.sh@641 -- # waitforlisten 3771042 /var/tmp/spdk2.sock 00:05:53.033 12:48:57 -- common/autotest_common.sh@817 -- # '[' -z 3771042 ']' 00:05:53.033 12:48:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.033 12:48:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.033 12:48:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:53.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.033 12:48:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.033 12:48:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.033 [2024-04-26 12:48:58.013508] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:53.033 [2024-04-26 12:48:58.013558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771042 ] 00:05:53.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.033 [2024-04-26 12:48:58.084036] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3770900 has claimed it. 00:05:53.033 [2024-04-26 12:48:58.084070] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3771042) - No such process 00:05:53.604 ERROR: process (pid: 3771042) is no longer running 00:05:53.604 12:48:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.604 12:48:58 -- common/autotest_common.sh@850 -- # return 1 00:05:53.604 12:48:58 -- common/autotest_common.sh@641 -- # es=1 00:05:53.604 12:48:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:53.604 12:48:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:53.604 12:48:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:53.604 12:48:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.604 12:48:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.604 12:48:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.604 12:48:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.604 12:48:58 -- event/cpu_locks.sh@141 -- # killprocess 3770900 00:05:53.604 12:48:58 -- common/autotest_common.sh@936 -- # '[' -z 3770900 ']' 00:05:53.604 12:48:58 -- common/autotest_common.sh@940 -- # kill -0 3770900 00:05:53.604 12:48:58 -- common/autotest_common.sh@941 -- # uname 00:05:53.604 12:48:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.605 12:48:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3770900 00:05:53.865 12:48:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.865 12:48:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.865 12:48:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3770900' 00:05:53.865 killing process with pid 3770900 00:05:53.865 12:48:58 -- common/autotest_common.sh@955 -- # kill 3770900 00:05:53.865 12:48:58 -- common/autotest_common.sh@960 -- # wait 3770900 00:05:53.865 00:05:53.865 real 0m1.744s 00:05:53.865 user 0m4.965s 00:05:53.865 sys 0m0.354s 00:05:53.865 12:48:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.865 12:48:58 -- common/autotest_common.sh@10 -- # set +x 00:05:53.865 ************************************ 00:05:53.865 END TEST locking_overlapped_coremask 00:05:53.865 ************************************ 00:05:53.865 12:48:58 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:53.865 12:48:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.865 12:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.865 12:48:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.126 ************************************ 00:05:54.126 START TEST locking_overlapped_coremask_via_rpc 00:05:54.126 ************************************ 00:05:54.126 12:48:59 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:54.126 12:48:59 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3771405 00:05:54.126 12:48:59 -- event/cpu_locks.sh@149 -- # waitforlisten 3771405 /var/tmp/spdk.sock 00:05:54.126 12:48:59 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.126 12:48:59 -- common/autotest_common.sh@817 -- # '[' -z 3771405 ']' 00:05:54.126 12:48:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.126 12:48:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.127 12:48:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.127 12:48:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.127 12:48:59 -- common/autotest_common.sh@10 -- # set +x 00:05:54.127 [2024-04-26 12:48:59.130988] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:54.127 [2024-04-26 12:48:59.131036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771405 ] 00:05:54.127 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.388 [2024-04-26 12:48:59.190726] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.389 [2024-04-26 12:48:59.190753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.389 [2024-04-26 12:48:59.255549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.389 [2024-04-26 12:48:59.255637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.389 [2024-04-26 12:48:59.255639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.988 12:48:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.988 12:48:59 -- common/autotest_common.sh@850 -- # return 0 00:05:54.988 12:48:59 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3771426 00:05:54.988 12:48:59 -- event/cpu_locks.sh@153 -- # waitforlisten 3771426 /var/tmp/spdk2.sock 00:05:54.988 12:48:59 -- common/autotest_common.sh@817 -- # '[' -z 3771426 ']' 00:05:54.988 12:48:59 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:54.988 12:48:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.988 12:48:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.988 12:48:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
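Both overlapped-coremask tests pair -m 0x7 with -m 0x1c: 0x7 is 0b00111 (cores 0-2) and 0x1c is 0b11100 (cores 2-4), so the two masks overlap only on core 2. That is why the previous test failed at startup with "Cannot create lock on core 2" and why the RPC-driven variant here trips over the same core once locks are re-enabled. A quick bash-arithmetic check of the overlap:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. only bit 2 (core 2) is shared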
00:05:54.988 12:48:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.988 12:48:59 -- common/autotest_common.sh@10 -- # set +x 00:05:54.988 [2024-04-26 12:48:59.950749] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:54.988 [2024-04-26 12:48:59.950801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771426 ] 00:05:54.988 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.988 [2024-04-26 12:49:00.023776] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.988 [2024-04-26 12:49:00.023800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.249 [2024-04-26 12:49:00.127435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.249 [2024-04-26 12:49:00.127591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.249 [2024-04-26 12:49:00.127593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.820 12:49:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.820 12:49:00 -- common/autotest_common.sh@850 -- # return 0 00:05:55.820 12:49:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.820 12:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.820 12:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.820 12:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.820 12:49:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.820 12:49:00 -- common/autotest_common.sh@638 -- # local es=0 00:05:55.820 12:49:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.820 12:49:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:55.820 12:49:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.820 12:49:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:55.820 12:49:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.820 12:49:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.820 12:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.820 12:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.820 [2024-04-26 12:49:00.730896] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3771405 has claimed it. 
00:05:55.820 request: 00:05:55.820 { 00:05:55.820 "method": "framework_enable_cpumask_locks", 00:05:55.820 "req_id": 1 00:05:55.820 } 00:05:55.820 Got JSON-RPC error response 00:05:55.820 response: 00:05:55.820 { 00:05:55.820 "code": -32603, 00:05:55.820 "message": "Failed to claim CPU core: 2" 00:05:55.820 } 00:05:55.820 12:49:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:55.820 12:49:00 -- common/autotest_common.sh@641 -- # es=1 00:05:55.820 12:49:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.820 12:49:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:55.820 12:49:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.820 12:49:00 -- event/cpu_locks.sh@158 -- # waitforlisten 3771405 /var/tmp/spdk.sock 00:05:55.820 12:49:00 -- common/autotest_common.sh@817 -- # '[' -z 3771405 ']' 00:05:55.820 12:49:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.820 12:49:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.820 12:49:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.820 12:49:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.820 12:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.081 12:49:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.081 12:49:00 -- common/autotest_common.sh@850 -- # return 0 00:05:56.081 12:49:00 -- event/cpu_locks.sh@159 -- # waitforlisten 3771426 /var/tmp/spdk2.sock 00:05:56.081 12:49:00 -- common/autotest_common.sh@817 -- # '[' -z 3771426 ']' 00:05:56.081 12:49:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.081 12:49:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.081 12:49:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
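The framework_enable_cpumask_locks request/response above is the JSON-RPC view of the same conflict: the second target cannot claim core 2 while pid 3771405 holds it. Outside the test harness the same call could be issued with SPDK's rpc.py against the second target's socket (a sketch, assuming the usual scripts/ layout of the spdk checkout used by this job):

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected to fail with "Failed to claim CPU core: 2" until the first target releases its locks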
00:05:56.081 12:49:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.081 12:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.081 12:49:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.081 12:49:01 -- common/autotest_common.sh@850 -- # return 0 00:05:56.081 12:49:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:56.081 12:49:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.081 12:49:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.081 12:49:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.081 00:05:56.081 real 0m2.003s 00:05:56.081 user 0m0.769s 00:05:56.081 sys 0m0.154s 00:05:56.081 12:49:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.081 12:49:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.081 ************************************ 00:05:56.081 END TEST locking_overlapped_coremask_via_rpc 00:05:56.081 ************************************ 00:05:56.081 12:49:01 -- event/cpu_locks.sh@174 -- # cleanup 00:05:56.081 12:49:01 -- event/cpu_locks.sh@15 -- # [[ -z 3771405 ]] 00:05:56.081 12:49:01 -- event/cpu_locks.sh@15 -- # killprocess 3771405 00:05:56.081 12:49:01 -- common/autotest_common.sh@936 -- # '[' -z 3771405 ']' 00:05:56.081 12:49:01 -- common/autotest_common.sh@940 -- # kill -0 3771405 00:05:56.081 12:49:01 -- common/autotest_common.sh@941 -- # uname 00:05:56.081 12:49:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.081 12:49:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3771405 00:05:56.341 12:49:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.341 12:49:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.341 12:49:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3771405' 00:05:56.341 killing process with pid 3771405 00:05:56.341 12:49:01 -- common/autotest_common.sh@955 -- # kill 3771405 00:05:56.341 12:49:01 -- common/autotest_common.sh@960 -- # wait 3771405 00:05:56.341 12:49:01 -- event/cpu_locks.sh@16 -- # [[ -z 3771426 ]] 00:05:56.341 12:49:01 -- event/cpu_locks.sh@16 -- # killprocess 3771426 00:05:56.341 12:49:01 -- common/autotest_common.sh@936 -- # '[' -z 3771426 ']' 00:05:56.341 12:49:01 -- common/autotest_common.sh@940 -- # kill -0 3771426 00:05:56.341 12:49:01 -- common/autotest_common.sh@941 -- # uname 00:05:56.341 12:49:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.341 12:49:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3771426 00:05:56.602 12:49:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:56.602 12:49:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:56.602 12:49:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3771426' 00:05:56.602 killing process with pid 3771426 00:05:56.602 12:49:01 -- common/autotest_common.sh@955 -- # kill 3771426 00:05:56.602 12:49:01 -- common/autotest_common.sh@960 -- # wait 3771426 00:05:56.602 12:49:01 -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.602 12:49:01 -- event/cpu_locks.sh@1 -- # cleanup 00:05:56.602 12:49:01 -- event/cpu_locks.sh@15 -- # [[ -z 3771405 ]] 00:05:56.602 12:49:01 -- event/cpu_locks.sh@15 -- # killprocess 3771405 
00:05:56.602 12:49:01 -- common/autotest_common.sh@936 -- # '[' -z 3771405 ']' 00:05:56.602 12:49:01 -- common/autotest_common.sh@940 -- # kill -0 3771405 00:05:56.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3771405) - No such process 00:05:56.602 12:49:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3771405 is not found' 00:05:56.602 Process with pid 3771405 is not found 00:05:56.602 12:49:01 -- event/cpu_locks.sh@16 -- # [[ -z 3771426 ]] 00:05:56.602 12:49:01 -- event/cpu_locks.sh@16 -- # killprocess 3771426 00:05:56.602 12:49:01 -- common/autotest_common.sh@936 -- # '[' -z 3771426 ']' 00:05:56.602 12:49:01 -- common/autotest_common.sh@940 -- # kill -0 3771426 00:05:56.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3771426) - No such process 00:05:56.602 12:49:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3771426 is not found' 00:05:56.602 Process with pid 3771426 is not found 00:05:56.602 12:49:01 -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.602 00:05:56.602 real 0m16.247s 00:05:56.602 user 0m27.135s 00:05:56.602 sys 0m4.895s 00:05:56.602 12:49:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.602 12:49:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.602 ************************************ 00:05:56.602 END TEST cpu_locks 00:05:56.602 ************************************ 00:05:56.863 00:05:56.863 real 0m43.529s 00:05:56.863 user 1m20.858s 00:05:56.863 sys 0m8.270s 00:05:56.863 12:49:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.863 12:49:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.863 ************************************ 00:05:56.863 END TEST event 00:05:56.863 ************************************ 00:05:56.863 12:49:01 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:56.863 12:49:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.863 12:49:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.863 12:49:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.863 ************************************ 00:05:56.863 START TEST thread 00:05:56.863 ************************************ 00:05:56.863 12:49:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:57.125 * Looking for test storage... 00:05:57.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:57.125 12:49:01 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.125 12:49:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:57.125 12:49:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.125 12:49:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.125 ************************************ 00:05:57.125 START TEST thread_poller_perf 00:05:57.125 ************************************ 00:05:57.125 12:49:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.125 [2024-04-26 12:49:02.140246] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:57.125 [2024-04-26 12:49:02.140339] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772012 ] 00:05:57.125 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.386 [2024-04-26 12:49:02.210653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.386 [2024-04-26 12:49:02.284144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.386 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:58.329 ====================================== 00:05:58.329 busy:2409271648 (cyc) 00:05:58.329 total_run_count: 287000 00:05:58.329 tsc_hz: 2400000000 (cyc) 00:05:58.329 ====================================== 00:05:58.329 poller_cost: 8394 (cyc), 3497 (nsec) 00:05:58.329 00:05:58.329 real 0m1.226s 00:05:58.329 user 0m1.144s 00:05:58.329 sys 0m0.077s 00:05:58.329 12:49:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.329 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.329 ************************************ 00:05:58.329 END TEST thread_poller_perf 00:05:58.329 ************************************ 00:05:58.329 12:49:03 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.329 12:49:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:58.329 12:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.329 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.589 ************************************ 00:05:58.589 START TEST thread_poller_perf 00:05:58.589 ************************************ 00:05:58.589 12:49:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.589 [2024-04-26 12:49:03.563862] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:58.589 [2024-04-26 12:49:03.563960] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772244 ] 00:05:58.589 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.589 [2024-04-26 12:49:03.630068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.849 [2024-04-26 12:49:03.698246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.849 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:59.788 ====================================== 00:05:59.788 busy:2402330538 (cyc) 00:05:59.788 total_run_count: 3816000 00:05:59.788 tsc_hz: 2400000000 (cyc) 00:05:59.788 ====================================== 00:05:59.788 poller_cost: 629 (cyc), 262 (nsec) 00:05:59.788 00:05:59.788 real 0m1.209s 00:05:59.788 user 0m1.133s 00:05:59.788 sys 0m0.072s 00:05:59.788 12:49:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.788 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.788 ************************************ 00:05:59.788 END TEST thread_poller_perf 00:05:59.788 ************************************ 00:05:59.788 12:49:04 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:59.788 00:05:59.788 real 0m2.921s 00:05:59.788 user 0m2.462s 00:05:59.788 sys 0m0.424s 00:05:59.788 12:49:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.788 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.788 ************************************ 00:05:59.788 END TEST thread 00:05:59.788 ************************************ 00:05:59.788 12:49:04 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:59.788 12:49:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.788 12:49:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.788 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:06:00.049 ************************************ 00:06:00.049 START TEST accel 00:06:00.049 ************************************ 00:06:00.049 12:49:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:00.049 * Looking for test storage... 00:06:00.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:00.049 12:49:05 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:00.049 12:49:05 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:00.049 12:49:05 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.049 12:49:05 -- accel/accel.sh@62 -- # spdk_tgt_pid=3772641 00:06:00.049 12:49:05 -- accel/accel.sh@63 -- # waitforlisten 3772641 00:06:00.049 12:49:05 -- common/autotest_common.sh@817 -- # '[' -z 3772641 ']' 00:06:00.049 12:49:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.049 12:49:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.049 12:49:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.049 12:49:05 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:00.049 12:49:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.049 12:49:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.049 12:49:05 -- accel/accel.sh@61 -- # build_accel_config 00:06:00.049 12:49:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.049 12:49:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.049 12:49:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.049 12:49:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.049 12:49:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.049 12:49:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.049 12:49:05 -- accel/accel.sh@41 -- # jq -r . 
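Referring back to the two poller_perf runs above: the reported poller_cost is just busy cycles divided by total_run_count, converted to nanoseconds with the listed tsc_hz. A quick check with the numbers copied from that output:

    echo $(( 2409271648 / 287000 ))     # -> 8394 cycles/poll for the 1 us period run
    echo $(( 2402330538 / 3816000 ))    # -> 629 cycles/poll for the 0 us period run
    # ns/poll = cycles * 10^9 / tsc_hz, e.g. 8394 * 10^9 / 2400000000 ≈ 3497 ns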
00:06:00.311 [2024-04-26 12:49:05.161693] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:00.311 [2024-04-26 12:49:05.161770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772641 ] 00:06:00.311 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.311 [2024-04-26 12:49:05.226939] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.311 [2024-04-26 12:49:05.300862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.882 12:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.882 12:49:05 -- common/autotest_common.sh@850 -- # return 0 00:06:00.882 12:49:05 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:00.882 12:49:05 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:00.882 12:49:05 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:00.882 12:49:05 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:00.882 12:49:05 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:00.882 12:49:05 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:00.882 12:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.882 12:49:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.882 12:49:05 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:00.882 12:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.144 12:49:05 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # IFS== 00:06:01.144 12:49:05 -- accel/accel.sh@72 -- # read -r opc module 00:06:01.144 12:49:05 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.145 12:49:05 -- accel/accel.sh@75 -- # killprocess 3772641 00:06:01.145 12:49:05 -- common/autotest_common.sh@936 -- # '[' -z 3772641 ']' 00:06:01.145 12:49:05 -- common/autotest_common.sh@940 -- # kill -0 3772641 00:06:01.145 12:49:05 -- common/autotest_common.sh@941 -- # uname 00:06:01.145 12:49:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.145 12:49:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3772641 00:06:01.145 12:49:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.145 12:49:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.145 12:49:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3772641' 00:06:01.145 killing process with pid 3772641 00:06:01.145 12:49:06 -- common/autotest_common.sh@955 -- # kill 3772641 00:06:01.145 12:49:06 -- common/autotest_common.sh@960 -- # wait 3772641 00:06:01.406 12:49:06 -- accel/accel.sh@76 -- # trap - ERR 00:06:01.406 12:49:06 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:01.406 12:49:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:01.406 12:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.406 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.406 12:49:06 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:01.406 12:49:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:01.406 12:49:06 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:01.406 12:49:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.406 12:49:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.406 12:49:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.406 12:49:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.406 12:49:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.406 12:49:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:01.406 12:49:06 -- accel/accel.sh@41 -- # jq -r . 00:06:01.406 12:49:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.406 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.406 12:49:06 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:01.406 12:49:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:01.406 12:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.406 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.667 ************************************ 00:06:01.667 START TEST accel_missing_filename 00:06:01.667 ************************************ 00:06:01.667 12:49:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:01.667 12:49:06 -- common/autotest_common.sh@638 -- # local es=0 00:06:01.667 12:49:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:01.667 12:49:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:01.667 12:49:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:01.667 12:49:06 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:01.667 12:49:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:01.667 12:49:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:01.667 12:49:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:01.667 12:49:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.667 12:49:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.667 12:49:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.667 12:49:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.667 12:49:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.667 12:49:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.667 12:49:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:01.667 12:49:06 -- accel/accel.sh@41 -- # jq -r . 00:06:01.667 [2024-04-26 12:49:06.613360] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:01.667 [2024-04-26 12:49:06.613422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773021 ] 00:06:01.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.667 [2024-04-26 12:49:06.676671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.927 [2024-04-26 12:49:06.744864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.927 [2024-04-26 12:49:06.776858] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.927 [2024-04-26 12:49:06.813971] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:01.927 A filename is required. 
00:06:01.927 12:49:06 -- common/autotest_common.sh@641 -- # es=234 00:06:01.927 12:49:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:01.927 12:49:06 -- common/autotest_common.sh@650 -- # es=106 00:06:01.927 12:49:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:01.927 12:49:06 -- common/autotest_common.sh@658 -- # es=1 00:06:01.927 12:49:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:01.927 00:06:01.927 real 0m0.283s 00:06:01.927 user 0m0.223s 00:06:01.927 sys 0m0.100s 00:06:01.927 12:49:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.927 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.927 ************************************ 00:06:01.927 END TEST accel_missing_filename 00:06:01.927 ************************************ 00:06:01.927 12:49:06 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.927 12:49:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:01.927 12:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.927 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:02.188 ************************************ 00:06:02.188 START TEST accel_compress_verify 00:06:02.188 ************************************ 00:06:02.188 12:49:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.188 12:49:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:02.188 12:49:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.188 12:49:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:02.188 12:49:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.188 12:49:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:02.188 12:49:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.188 12:49:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.188 12:49:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.188 12:49:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.188 12:49:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.188 12:49:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.188 12:49:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.188 12:49:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.188 12:49:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.188 12:49:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.188 12:49:07 -- accel/accel.sh@41 -- # jq -r . 00:06:02.188 [2024-04-26 12:49:07.089766] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:02.188 [2024-04-26 12:49:07.089855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773108 ] 00:06:02.188 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.188 [2024-04-26 12:49:07.155963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.188 [2024-04-26 12:49:07.228278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.448 [2024-04-26 12:49:07.260831] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.448 [2024-04-26 12:49:07.298262] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:02.448 00:06:02.448 Compression does not support the verify option, aborting. 00:06:02.448 12:49:07 -- common/autotest_common.sh@641 -- # es=161 00:06:02.448 12:49:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:02.449 12:49:07 -- common/autotest_common.sh@650 -- # es=33 00:06:02.449 12:49:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:02.449 12:49:07 -- common/autotest_common.sh@658 -- # es=1 00:06:02.449 12:49:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:02.449 00:06:02.449 real 0m0.292s 00:06:02.449 user 0m0.220s 00:06:02.449 sys 0m0.112s 00:06:02.449 12:49:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.449 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.449 ************************************ 00:06:02.449 END TEST accel_compress_verify 00:06:02.449 ************************************ 00:06:02.449 12:49:07 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:02.449 12:49:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:02.449 12:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.449 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.710 ************************************ 00:06:02.710 START TEST accel_wrong_workload 00:06:02.710 ************************************ 00:06:02.710 12:49:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:02.710 12:49:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:02.710 12:49:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:02.710 12:49:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:02.710 12:49:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.710 12:49:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:02.710 12:49:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.710 12:49:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:02.710 12:49:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:02.710 12:49:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.710 12:49:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.710 12:49:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.710 12:49:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.710 12:49:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.710 12:49:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.710 12:49:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.710 12:49:07 -- accel/accel.sh@41 -- # jq -r . 
00:06:02.710 Unsupported workload type: foobar 00:06:02.710 [2024-04-26 12:49:07.568913] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:02.710 accel_perf options: 00:06:02.710 [-h help message] 00:06:02.710 [-q queue depth per core] 00:06:02.710 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:02.710 [-T number of threads per core 00:06:02.710 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:02.710 [-t time in seconds] 00:06:02.710 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:02.710 [ dif_verify, , dif_generate, dif_generate_copy 00:06:02.710 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:02.710 [-l for compress/decompress workloads, name of uncompressed input file 00:06:02.710 [-S for crc32c workload, use this seed value (default 0) 00:06:02.710 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:02.710 [-f for fill workload, use this BYTE value (default 255) 00:06:02.710 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:02.710 [-y verify result if this switch is on] 00:06:02.710 [-a tasks to allocate per core (default: same value as -q)] 00:06:02.710 Can be used to spread operations across a wider range of memory. 00:06:02.710 12:49:07 -- common/autotest_common.sh@641 -- # es=1 00:06:02.710 12:49:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:02.710 12:49:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:02.710 12:49:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:02.710 00:06:02.710 real 0m0.038s 00:06:02.710 user 0m0.023s 00:06:02.710 sys 0m0.014s 00:06:02.711 12:49:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.711 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 ************************************ 00:06:02.711 END TEST accel_wrong_workload 00:06:02.711 ************************************ 00:06:02.711 Error: writing output failed: Broken pipe 00:06:02.711 12:49:07 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:02.711 12:49:07 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:02.711 12:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.711 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 ************************************ 00:06:02.711 START TEST accel_negative_buffers 00:06:02.711 ************************************ 00:06:02.711 12:49:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:02.711 12:49:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:02.711 12:49:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:02.711 12:49:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:02.711 12:49:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.711 12:49:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:02.711 12:49:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.711 12:49:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:02.711 12:49:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:02.711 12:49:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.711 12:49:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.711 12:49:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.972 12:49:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.972 12:49:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.972 12:49:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.972 12:49:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.972 12:49:07 -- accel/accel.sh@41 -- # jq -r . 00:06:02.972 -x option must be non-negative. 00:06:02.972 [2024-04-26 12:49:07.792612] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:02.972 accel_perf options: 00:06:02.972 [-h help message] 00:06:02.972 [-q queue depth per core] 00:06:02.972 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:02.972 [-T number of threads per core 00:06:02.972 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:02.972 [-t time in seconds] 00:06:02.972 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:02.972 [ dif_verify, , dif_generate, dif_generate_copy 00:06:02.972 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:02.972 [-l for compress/decompress workloads, name of uncompressed input file 00:06:02.972 [-S for crc32c workload, use this seed value (default 0) 00:06:02.972 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:02.972 [-f for fill workload, use this BYTE value (default 255) 00:06:02.972 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:02.972 [-y verify result if this switch is on] 00:06:02.972 [-a tasks to allocate per core (default: same value as -q)] 00:06:02.972 Can be used to spread operations across a wider range of memory. 
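The option summary printed above is enough to build valid invocations; a few illustrative examples follow (the relative binary path and the chosen values are assumptions, only the flags and their meanings come from the help text):

    APP=./build/examples/accel_perf
    $APP -t 1 -w xor -x 2 -y                  # xor with the minimum two source buffers, verify on
    $APP -t 5 -q 64 -o 4096 -w crc32c -S 32   # crc32c for 5 s, queue depth 64, 4 KiB transfers, seed 32
    $APP -t 1 -w compress -l ./test/accel/bib # compress the uncompressed input file named by -l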
00:06:02.972 12:49:07 -- common/autotest_common.sh@641 -- # es=1 00:06:02.972 12:49:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:02.972 12:49:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:02.972 12:49:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:02.972 00:06:02.972 real 0m0.035s 00:06:02.972 user 0m0.019s 00:06:02.972 sys 0m0.016s 00:06:02.972 12:49:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.972 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.972 ************************************ 00:06:02.972 END TEST accel_negative_buffers 00:06:02.972 ************************************ 00:06:02.972 Error: writing output failed: Broken pipe 00:06:02.972 12:49:07 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:02.972 12:49:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:02.972 12:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.972 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.972 ************************************ 00:06:02.972 START TEST accel_crc32c 00:06:02.972 ************************************ 00:06:02.972 12:49:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:02.972 12:49:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.972 12:49:07 -- accel/accel.sh@17 -- # local accel_module 00:06:02.972 12:49:07 -- accel/accel.sh@19 -- # IFS=: 00:06:02.972 12:49:07 -- accel/accel.sh@19 -- # read -r var val 00:06:02.972 12:49:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:02.972 12:49:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:02.972 12:49:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.972 12:49:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.972 12:49:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.972 12:49:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.972 12:49:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.972 12:49:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.972 12:49:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.972 12:49:07 -- accel/accel.sh@41 -- # jq -r . 00:06:02.972 [2024-04-26 12:49:08.016142] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
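The crc32c case that starts here was launched as accel_test -t 1 -w crc32c -S 32 -y. A hedged standalone reproduction outside the test harness; the binary path and flags are the ones logged, while the empty JSON config is an assumption standing in for what build_accel_config produces:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -c <(echo '{"subsystems": []}') \
        -t 1 -w crc32c -S 32 -y    # 1 second of crc32c with seed 32, verify results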
00:06:02.972 [2024-04-26 12:49:08.016214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773461 ] 00:06:03.233 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.233 [2024-04-26 12:49:08.082409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.233 [2024-04-26 12:49:08.153785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val= 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val= 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val=0x1 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val= 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val= 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val=crc32c 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val=32 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val= 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val=software 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@22 -- # accel_module=software 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val=32 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val=32 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- 
accel/accel.sh@20 -- # val=1 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val=Yes 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val= 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:03.233 12:49:08 -- accel/accel.sh@20 -- # val= 00:06:03.233 12:49:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # IFS=: 00:06:03.233 12:49:08 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.617 12:49:09 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:04.617 12:49:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.617 00:06:04.617 real 0m1.295s 00:06:04.617 user 0m1.194s 00:06:04.617 sys 0m0.111s 00:06:04.617 12:49:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.617 12:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.617 ************************************ 00:06:04.617 END TEST accel_crc32c 00:06:04.617 ************************************ 00:06:04.617 12:49:09 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:04.617 12:49:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:04.617 12:49:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.617 12:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.617 ************************************ 00:06:04.617 START TEST 
accel_crc32c_C2 00:06:04.617 ************************************ 00:06:04.617 12:49:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:04.617 12:49:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.617 12:49:09 -- accel/accel.sh@17 -- # local accel_module 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:04.617 12:49:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:04.617 12:49:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.617 12:49:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.617 12:49:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.617 12:49:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.617 12:49:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.617 12:49:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.617 12:49:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.617 12:49:09 -- accel/accel.sh@41 -- # jq -r . 00:06:04.617 [2024-04-26 12:49:09.488501] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:04.617 [2024-04-26 12:49:09.488597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773819 ] 00:06:04.617 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.617 [2024-04-26 12:49:09.552516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.617 [2024-04-26 12:49:09.620977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val=0x1 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val=crc32c 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val=0 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.617 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.617 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.617 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val=software 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val=32 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val=32 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val=1 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val=Yes 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 12:49:09 -- accel/accel.sh@20 -- # val= 00:06:04.618 12:49:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 12:49:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@20 -- # val= 00:06:06.004 12:49:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # IFS=: 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@20 -- # val= 00:06:06.004 12:49:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # IFS=: 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@20 -- # val= 00:06:06.004 12:49:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # IFS=: 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@20 -- # val= 00:06:06.004 12:49:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # IFS=: 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@20 -- # val= 00:06:06.004 12:49:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # IFS=: 00:06:06.004 12:49:10 -- 
accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@20 -- # val= 00:06:06.004 12:49:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # IFS=: 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.004 12:49:10 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:06.004 12:49:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.004 00:06:06.004 real 0m1.291s 00:06:06.004 user 0m1.193s 00:06:06.004 sys 0m0.109s 00:06:06.004 12:49:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.004 12:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:06.004 ************************************ 00:06:06.004 END TEST accel_crc32c_C2 00:06:06.004 ************************************ 00:06:06.004 12:49:10 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:06.004 12:49:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:06.004 12:49:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.004 12:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:06.004 ************************************ 00:06:06.004 START TEST accel_copy 00:06:06.004 ************************************ 00:06:06.004 12:49:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:06.004 12:49:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.004 12:49:10 -- accel/accel.sh@17 -- # local accel_module 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # IFS=: 00:06:06.004 12:49:10 -- accel/accel.sh@19 -- # read -r var val 00:06:06.004 12:49:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:06.004 12:49:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:06.004 12:49:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.004 12:49:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.004 12:49:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.004 12:49:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.004 12:49:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.004 12:49:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.004 12:49:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.004 12:49:10 -- accel/accel.sh@41 -- # jq -r . 00:06:06.004 [2024-04-26 12:49:10.968728] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
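Each case above finishes with the same three checks, visible after xtrace expansion as [[ -n software ]], [[ -n crc32c ]] and [[ software == software ]]. A sketch of that assertion step using the variable names the trace assigns (accel_module, accel_opc); the expected strings are whatever this run configured, so treat the literals as examples:

    [[ -n "$accel_module" ]]               # a module was parsed from accel_perf output (software here)
    [[ -n "$accel_opc" ]]                  # an opcode was parsed (copy, crc32c, fill, ...)
    [[ "$accel_module" == "software" ]]    # this run expects the software engine to handle the op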
00:06:06.004 [2024-04-26 12:49:10.968827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774173 ] 00:06:06.004 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.004 [2024-04-26 12:49:11.034487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.265 [2024-04-26 12:49:11.106156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val= 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val= 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val=0x1 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val= 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val= 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val=copy 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val= 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val=software 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val=32 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val=32 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val=1 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val=Yes 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val= 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:06.265 12:49:11 -- accel/accel.sh@20 -- # val= 00:06:06.265 12:49:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # IFS=: 00:06:06.265 12:49:11 -- accel/accel.sh@19 -- # read -r var val 00:06:07.206 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.206 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.206 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.206 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.206 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.207 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.207 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.207 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.207 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.207 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.207 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.207 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.207 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.207 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.207 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.207 12:49:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.207 12:49:12 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:07.207 12:49:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.207 00:06:07.207 real 0m1.297s 00:06:07.207 user 0m1.202s 00:06:07.207 sys 0m0.105s 00:06:07.207 12:49:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.207 12:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.207 ************************************ 00:06:07.207 END TEST accel_copy 00:06:07.207 ************************************ 00:06:07.491 12:49:12 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.491 12:49:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:07.491 12:49:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.491 12:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.491 ************************************ 00:06:07.491 START TEST accel_fill 00:06:07.491 ************************************ 00:06:07.491 12:49:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.491 12:49:12 -- accel/accel.sh@16 -- # local accel_opc 
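The fill case being set up here was started as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y; the 0x80 fill byte in the val trace below is 128. A hedged standalone equivalent of the logged invocation, with the same assumed empty config as the earlier sketches:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -c <(echo '{"subsystems": []}') \
        -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 128, queue depth 64, 64 tasks per core, verify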
00:06:07.491 12:49:12 -- accel/accel.sh@17 -- # local accel_module 00:06:07.491 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.491 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.491 12:49:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.491 12:49:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.491 12:49:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.491 12:49:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.491 12:49:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.491 12:49:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.491 12:49:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.491 12:49:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.491 12:49:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.491 12:49:12 -- accel/accel.sh@41 -- # jq -r . 00:06:07.491 [2024-04-26 12:49:12.449658] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:07.491 [2024-04-26 12:49:12.449722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774446 ] 00:06:07.491 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.491 [2024-04-26 12:49:12.514963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.752 [2024-04-26 12:49:12.588068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=0x1 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=fill 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=0x80 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 
-- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=software 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=64 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=64 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=1 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val=Yes 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:07.752 12:49:12 -- accel/accel.sh@20 -- # val= 00:06:07.752 12:49:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # IFS=: 00:06:07.752 12:49:12 -- accel/accel.sh@19 -- # read -r var val 00:06:08.694 12:49:13 -- accel/accel.sh@20 -- # val= 00:06:08.694 12:49:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # IFS=: 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # read -r var val 00:06:08.694 12:49:13 -- accel/accel.sh@20 -- # val= 00:06:08.694 12:49:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # IFS=: 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # read -r var val 00:06:08.694 12:49:13 -- accel/accel.sh@20 -- # val= 00:06:08.694 12:49:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # IFS=: 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # read -r var val 00:06:08.694 12:49:13 -- accel/accel.sh@20 -- # val= 00:06:08.694 12:49:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # IFS=: 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # read -r var val 00:06:08.694 12:49:13 -- accel/accel.sh@20 -- # val= 00:06:08.694 12:49:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # IFS=: 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # read -r var val 00:06:08.694 12:49:13 -- accel/accel.sh@20 -- # val= 00:06:08.694 12:49:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.694 12:49:13 -- accel/accel.sh@19 
-- # IFS=: 00:06:08.694 12:49:13 -- accel/accel.sh@19 -- # read -r var val 00:06:08.694 12:49:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.694 12:49:13 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:08.694 12:49:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.694 00:06:08.694 real 0m1.297s 00:06:08.694 user 0m1.202s 00:06:08.694 sys 0m0.106s 00:06:08.694 12:49:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.694 12:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 ************************************ 00:06:08.694 END TEST accel_fill 00:06:08.694 ************************************ 00:06:08.955 12:49:13 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:08.955 12:49:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:08.955 12:49:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.955 12:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.955 ************************************ 00:06:08.955 START TEST accel_copy_crc32c 00:06:08.955 ************************************ 00:06:08.955 12:49:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:08.955 12:49:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.955 12:49:13 -- accel/accel.sh@17 -- # local accel_module 00:06:08.955 12:49:13 -- accel/accel.sh@19 -- # IFS=: 00:06:08.955 12:49:13 -- accel/accel.sh@19 -- # read -r var val 00:06:08.955 12:49:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:08.955 12:49:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:08.955 12:49:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.955 12:49:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.955 12:49:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.955 12:49:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.955 12:49:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.955 12:49:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.955 12:49:13 -- accel/accel.sh@40 -- # local IFS=, 00:06:08.955 12:49:13 -- accel/accel.sh@41 -- # jq -r . 00:06:08.955 [2024-04-26 12:49:13.933782] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
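The repeated IFS=: / read -r var val / case "$var" blocks above are the loop that parses accel_perf's configuration banner into accel_opc and accel_module. A minimal sketch of that parsing pattern; the key strings in the case arms and the printf stand-in for accel_perf's output are illustrative, only the loop shape and variable names come from the trace:

    while IFS=: read -r var val; do
        val=${val# }                             # drop the space that follows the colon
        case "$var" in
            *"Workload Type"*) accel_opc=$val ;;    # e.g. crc32c, copy_crc32c
            *"Module"*)        accel_module=$val ;; # e.g. software
        esac
    done < <(printf '%s\n' 'Workload Type: copy_crc32c' 'Module: software')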
00:06:08.955 [2024-04-26 12:49:13.933865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774693 ] 00:06:08.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.955 [2024-04-26 12:49:13.997486] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.216 [2024-04-26 12:49:14.064155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val= 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val= 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val=0x1 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val= 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val= 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val=0 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val= 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.216 12:49:14 -- accel/accel.sh@20 -- # val=software 00:06:09.216 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.216 12:49:14 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.216 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.217 12:49:14 -- accel/accel.sh@20 -- # val=32 00:06:09.217 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # read -r var val 
00:06:09.217 12:49:14 -- accel/accel.sh@20 -- # val=32 00:06:09.217 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.217 12:49:14 -- accel/accel.sh@20 -- # val=1 00:06:09.217 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.217 12:49:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.217 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.217 12:49:14 -- accel/accel.sh@20 -- # val=Yes 00:06:09.217 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.217 12:49:14 -- accel/accel.sh@20 -- # val= 00:06:09.217 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:09.217 12:49:14 -- accel/accel.sh@20 -- # val= 00:06:09.217 12:49:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # IFS=: 00:06:09.217 12:49:14 -- accel/accel.sh@19 -- # read -r var val 00:06:10.164 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.164 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.164 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.164 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.164 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.164 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.164 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.164 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.164 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.164 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.164 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.164 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.165 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.165 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.165 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.165 12:49:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.165 12:49:15 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:10.165 12:49:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.165 00:06:10.165 real 0m1.289s 00:06:10.165 user 0m1.196s 00:06:10.165 sys 0m0.104s 00:06:10.165 12:49:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.165 12:49:15 -- common/autotest_common.sh@10 -- # set +x 00:06:10.165 ************************************ 00:06:10.165 END TEST accel_copy_crc32c 00:06:10.165 ************************************ 00:06:10.426 12:49:15 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.426 
12:49:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:10.426 12:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.426 12:49:15 -- common/autotest_common.sh@10 -- # set +x 00:06:10.426 ************************************ 00:06:10.426 START TEST accel_copy_crc32c_C2 00:06:10.426 ************************************ 00:06:10.426 12:49:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.426 12:49:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.426 12:49:15 -- accel/accel.sh@17 -- # local accel_module 00:06:10.426 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.426 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.426 12:49:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:10.426 12:49:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:10.426 12:49:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.426 12:49:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.426 12:49:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.426 12:49:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.426 12:49:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.426 12:49:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.426 12:49:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.426 12:49:15 -- accel/accel.sh@41 -- # jq -r . 00:06:10.426 [2024-04-26 12:49:15.408197] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:10.426 [2024-04-26 12:49:15.408270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774955 ] 00:06:10.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.426 [2024-04-26 12:49:15.475439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.688 [2024-04-26 12:49:15.547935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val=0x1 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 
12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val=0 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.688 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.688 12:49:15 -- accel/accel.sh@20 -- # val=software 00:06:10.688 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.689 12:49:15 -- accel/accel.sh@20 -- # val=32 00:06:10.689 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.689 12:49:15 -- accel/accel.sh@20 -- # val=32 00:06:10.689 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.689 12:49:15 -- accel/accel.sh@20 -- # val=1 00:06:10.689 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.689 12:49:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.689 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.689 12:49:15 -- accel/accel.sh@20 -- # val=Yes 00:06:10.689 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.689 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.689 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:10.689 12:49:15 -- accel/accel.sh@20 -- # val= 00:06:10.689 12:49:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # IFS=: 00:06:10.689 12:49:15 -- accel/accel.sh@19 -- # read -r var val 00:06:11.632 12:49:16 -- accel/accel.sh@20 -- # val= 00:06:11.632 12:49:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # IFS=: 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # read -r var val 00:06:11.632 12:49:16 -- accel/accel.sh@20 -- # val= 00:06:11.632 12:49:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # IFS=: 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # read -r var val 00:06:11.632 12:49:16 -- accel/accel.sh@20 -- # val= 00:06:11.632 12:49:16 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # IFS=: 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # read -r var val 00:06:11.632 12:49:16 -- accel/accel.sh@20 -- # val= 00:06:11.632 12:49:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # IFS=: 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # read -r var val 00:06:11.632 12:49:16 -- accel/accel.sh@20 -- # val= 00:06:11.632 12:49:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # IFS=: 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # read -r var val 00:06:11.632 12:49:16 -- accel/accel.sh@20 -- # val= 00:06:11.632 12:49:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # IFS=: 00:06:11.632 12:49:16 -- accel/accel.sh@19 -- # read -r var val 00:06:11.632 12:49:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.632 12:49:16 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:11.632 12:49:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.632 00:06:11.632 real 0m1.299s 00:06:11.632 user 0m1.199s 00:06:11.632 sys 0m0.111s 00:06:11.632 12:49:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.632 12:49:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.632 ************************************ 00:06:11.632 END TEST accel_copy_crc32c_C2 00:06:11.632 ************************************ 00:06:11.894 12:49:16 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:11.894 12:49:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:11.894 12:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.894 12:49:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.894 ************************************ 00:06:11.894 START TEST accel_dualcast 00:06:11.894 ************************************ 00:06:11.894 12:49:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:11.894 12:49:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.894 12:49:16 -- accel/accel.sh@17 -- # local accel_module 00:06:11.894 12:49:16 -- accel/accel.sh@19 -- # IFS=: 00:06:11.894 12:49:16 -- accel/accel.sh@19 -- # read -r var val 00:06:11.894 12:49:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:11.894 12:49:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:11.894 12:49:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.894 12:49:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.894 12:49:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.894 12:49:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.894 12:49:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.894 12:49:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.894 12:49:16 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.894 12:49:16 -- accel/accel.sh@41 -- # jq -r . 00:06:11.894 [2024-04-26 12:49:16.890345] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:11.894 [2024-04-26 12:49:16.890425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775298 ] 00:06:11.894 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.156 [2024-04-26 12:49:16.955688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.156 [2024-04-26 12:49:17.027576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.156 12:49:17 -- accel/accel.sh@20 -- # val= 00:06:12.156 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.156 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.156 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.156 12:49:17 -- accel/accel.sh@20 -- # val= 00:06:12.156 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.156 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.156 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val=0x1 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val= 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val= 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val=dualcast 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val= 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val=software 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val=32 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val=32 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val=1 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val=Yes 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val= 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:12.157 12:49:17 -- accel/accel.sh@20 -- # val= 00:06:12.157 12:49:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # IFS=: 00:06:12.157 12:49:17 -- accel/accel.sh@19 -- # read -r var val 00:06:13.100 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.100 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.100 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.100 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.100 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.100 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.100 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.100 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.100 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.100 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.100 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.100 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.100 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.100 12:49:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.100 12:49:18 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:13.100 12:49:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.100 00:06:13.100 real 0m1.295s 00:06:13.100 user 0m1.196s 00:06:13.100 sys 0m0.109s 00:06:13.100 12:49:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.100 12:49:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.100 ************************************ 00:06:13.101 END TEST accel_dualcast 00:06:13.101 ************************************ 00:06:13.362 12:49:18 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:13.362 12:49:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.362 12:49:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.362 12:49:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.362 ************************************ 00:06:13.362 START TEST accel_compare 00:06:13.362 ************************************ 00:06:13.362 12:49:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:13.362 12:49:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.362 12:49:18 
-- accel/accel.sh@17 -- # local accel_module 00:06:13.362 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.362 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.362 12:49:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:13.362 12:49:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:13.362 12:49:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.362 12:49:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.362 12:49:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.362 12:49:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.362 12:49:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.362 12:49:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.362 12:49:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.362 12:49:18 -- accel/accel.sh@41 -- # jq -r . 00:06:13.362 [2024-04-26 12:49:18.367573] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:13.362 [2024-04-26 12:49:18.367640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775659 ] 00:06:13.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.624 [2024-04-26 12:49:18.433357] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.624 [2024-04-26 12:49:18.503922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val=0x1 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val=compare 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- 
accel/accel.sh@20 -- # val=software 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val=32 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val=32 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val=1 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val=Yes 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:13.624 12:49:18 -- accel/accel.sh@20 -- # val= 00:06:13.624 12:49:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # IFS=: 00:06:13.624 12:49:18 -- accel/accel.sh@19 -- # read -r var val 00:06:14.568 12:49:19 -- accel/accel.sh@20 -- # val= 00:06:14.568 12:49:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.568 12:49:19 -- accel/accel.sh@19 -- # IFS=: 00:06:14.568 12:49:19 -- accel/accel.sh@19 -- # read -r var val 00:06:14.829 12:49:19 -- accel/accel.sh@20 -- # val= 00:06:14.829 12:49:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # IFS=: 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # read -r var val 00:06:14.829 12:49:19 -- accel/accel.sh@20 -- # val= 00:06:14.829 12:49:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # IFS=: 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # read -r var val 00:06:14.829 12:49:19 -- accel/accel.sh@20 -- # val= 00:06:14.829 12:49:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # IFS=: 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # read -r var val 00:06:14.829 12:49:19 -- accel/accel.sh@20 -- # val= 00:06:14.829 12:49:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # IFS=: 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # read -r var val 00:06:14.829 12:49:19 -- accel/accel.sh@20 -- # val= 00:06:14.829 12:49:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # IFS=: 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # read -r var val 00:06:14.829 12:49:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.829 12:49:19 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:14.829 12:49:19 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:14.829 00:06:14.829 real 0m1.295s 00:06:14.829 user 0m1.202s 00:06:14.829 sys 0m0.102s 00:06:14.829 12:49:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.829 12:49:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.829 ************************************ 00:06:14.829 END TEST accel_compare 00:06:14.829 ************************************ 00:06:14.829 12:49:19 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:14.829 12:49:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.829 12:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.829 12:49:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.829 ************************************ 00:06:14.829 START TEST accel_xor 00:06:14.829 ************************************ 00:06:14.829 12:49:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:14.829 12:49:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.829 12:49:19 -- accel/accel.sh@17 -- # local accel_module 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # IFS=: 00:06:14.829 12:49:19 -- accel/accel.sh@19 -- # read -r var val 00:06:14.829 12:49:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:14.829 12:49:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:14.829 12:49:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.829 12:49:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.829 12:49:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.829 12:49:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.829 12:49:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.829 12:49:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.829 12:49:19 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.829 12:49:19 -- accel/accel.sh@41 -- # jq -r . 00:06:14.829 [2024-04-26 12:49:19.846080] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:14.829 [2024-04-26 12:49:19.846181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776013 ] 00:06:14.829 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.090 [2024-04-26 12:49:19.912836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.090 [2024-04-26 12:49:19.985127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val= 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val= 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val=0x1 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val= 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val= 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val=xor 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val=2 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val= 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val=software 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val=32 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- accel/accel.sh@20 -- # val=32 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.090 12:49:20 -- 
accel/accel.sh@20 -- # val=1 00:06:15.090 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.090 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.091 12:49:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.091 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.091 12:49:20 -- accel/accel.sh@20 -- # val=Yes 00:06:15.091 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.091 12:49:20 -- accel/accel.sh@20 -- # val= 00:06:15.091 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:15.091 12:49:20 -- accel/accel.sh@20 -- # val= 00:06:15.091 12:49:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # IFS=: 00:06:15.091 12:49:20 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.475 12:49:21 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:16.475 12:49:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.475 00:06:16.475 real 0m1.299s 00:06:16.475 user 0m1.200s 00:06:16.475 sys 0m0.108s 00:06:16.475 12:49:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.475 12:49:21 -- common/autotest_common.sh@10 -- # set +x 00:06:16.475 ************************************ 00:06:16.475 END TEST accel_xor 00:06:16.475 ************************************ 00:06:16.475 12:49:21 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:16.475 12:49:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:16.475 12:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.475 12:49:21 -- common/autotest_common.sh@10 -- # set +x 00:06:16.475 ************************************ 00:06:16.475 START TEST accel_xor 
00:06:16.475 ************************************ 00:06:16.475 12:49:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:16.475 12:49:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.475 12:49:21 -- accel/accel.sh@17 -- # local accel_module 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:16.475 12:49:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:16.475 12:49:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.475 12:49:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.475 12:49:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.475 12:49:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.475 12:49:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.475 12:49:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.475 12:49:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.475 12:49:21 -- accel/accel.sh@41 -- # jq -r . 00:06:16.475 [2024-04-26 12:49:21.326503] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:16.475 [2024-04-26 12:49:21.326585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776377 ] 00:06:16.475 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.475 [2024-04-26 12:49:21.388229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.475 [2024-04-26 12:49:21.450160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val=0x1 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.475 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.475 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.475 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val=xor 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val=3 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val=software 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val=32 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val=32 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val=1 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val=Yes 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:16.476 12:49:21 -- accel/accel.sh@20 -- # val= 00:06:16.476 12:49:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # IFS=: 00:06:16.476 12:49:21 -- accel/accel.sh@19 -- # read -r var val 00:06:17.859 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:17.859 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:17.859 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:17.859 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:17.859 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:17.859 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:17.859 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:17.859 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:17.859 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:17.859 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:17.859 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.860 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:17.860 12:49:22 -- accel/accel.sh@19 -- # 
read -r var val 00:06:17.860 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:17.860 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.860 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:17.860 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:17.860 12:49:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.860 12:49:22 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:17.860 12:49:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.860 00:06:17.860 real 0m1.280s 00:06:17.860 user 0m1.194s 00:06:17.860 sys 0m0.098s 00:06:17.860 12:49:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.860 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:06:17.860 ************************************ 00:06:17.860 END TEST accel_xor 00:06:17.860 ************************************ 00:06:17.860 12:49:22 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:17.860 12:49:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:17.860 12:49:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.860 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:06:17.860 ************************************ 00:06:17.860 START TEST accel_dif_verify 00:06:17.860 ************************************ 00:06:17.860 12:49:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:17.860 12:49:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.860 12:49:22 -- accel/accel.sh@17 -- # local accel_module 00:06:17.860 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:17.860 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:17.860 12:49:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:17.860 12:49:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:17.860 12:49:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.860 12:49:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.860 12:49:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.860 12:49:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.860 12:49:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.860 12:49:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.860 12:49:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.860 12:49:22 -- accel/accel.sh@41 -- # jq -r . 00:06:17.860 [2024-04-26 12:49:22.791917] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:17.860 [2024-04-26 12:49:22.792010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776734 ] 00:06:17.860 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.860 [2024-04-26 12:49:22.857851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.120 [2024-04-26 12:49:22.928765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val=0x1 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val=dif_verify 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val=software 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r 
var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val=32 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val=32 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val=1 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val=No 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:18.120 12:49:22 -- accel/accel.sh@20 -- # val= 00:06:18.120 12:49:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # IFS=: 00:06:18.120 12:49:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.059 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.059 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.059 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.059 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.059 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.059 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.059 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.059 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.059 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.059 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.059 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.059 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.059 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.059 12:49:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.059 12:49:24 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:19.059 12:49:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.059 00:06:19.059 real 0m1.296s 00:06:19.059 user 0m1.194s 00:06:19.059 sys 0m0.113s 00:06:19.059 12:49:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.060 12:49:24 -- common/autotest_common.sh@10 -- # set +x 00:06:19.060 
************************************ 00:06:19.060 END TEST accel_dif_verify 00:06:19.060 ************************************ 00:06:19.060 12:49:24 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:19.060 12:49:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:19.060 12:49:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.060 12:49:24 -- common/autotest_common.sh@10 -- # set +x 00:06:19.320 ************************************ 00:06:19.320 START TEST accel_dif_generate 00:06:19.320 ************************************ 00:06:19.320 12:49:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:19.320 12:49:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.320 12:49:24 -- accel/accel.sh@17 -- # local accel_module 00:06:19.320 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.320 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.320 12:49:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:19.320 12:49:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:19.320 12:49:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.320 12:49:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.320 12:49:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.320 12:49:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.320 12:49:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.320 12:49:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.320 12:49:24 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.320 12:49:24 -- accel/accel.sh@41 -- # jq -r . 00:06:19.320 [2024-04-26 12:49:24.278894] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:19.320 [2024-04-26 12:49:24.278994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776999 ] 00:06:19.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.320 [2024-04-26 12:49:24.346022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.580 [2024-04-26 12:49:24.417676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val=0x1 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val=dif_generate 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val=software 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read 
-r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val=32 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val=32 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val=1 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val=No 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:19.580 12:49:24 -- accel/accel.sh@20 -- # val= 00:06:19.580 12:49:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # IFS=: 00:06:19.580 12:49:24 -- accel/accel.sh@19 -- # read -r var val 00:06:20.520 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:20.520 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:20.520 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:20.520 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:20.520 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:20.520 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:20.520 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:20.520 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:20.520 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:20.520 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:20.520 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:20.520 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:20.520 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:20.520 12:49:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.520 12:49:25 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:20.520 12:49:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.520 00:06:20.520 real 0m1.299s 00:06:20.520 user 0m1.206s 00:06:20.520 sys 0m0.108s 00:06:20.520 12:49:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.520 12:49:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.520 
************************************ 00:06:20.520 END TEST accel_dif_generate 00:06:20.520 ************************************ 00:06:20.781 12:49:25 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:20.781 12:49:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:20.781 12:49:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.781 12:49:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.781 ************************************ 00:06:20.781 START TEST accel_dif_generate_copy 00:06:20.781 ************************************ 00:06:20.781 12:49:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:20.781 12:49:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.781 12:49:25 -- accel/accel.sh@17 -- # local accel_module 00:06:20.781 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:20.781 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:20.781 12:49:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:20.781 12:49:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:20.781 12:49:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.781 12:49:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.781 12:49:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.781 12:49:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.781 12:49:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.781 12:49:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.781 12:49:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.781 12:49:25 -- accel/accel.sh@41 -- # jq -r . 00:06:20.781 [2024-04-26 12:49:25.755950] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:20.781 [2024-04-26 12:49:25.756013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777238 ] 00:06:20.781 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.781 [2024-04-26 12:49:25.816615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.041 [2024-04-26 12:49:25.879330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.041 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:21.041 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.041 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.041 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.041 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:21.041 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val=0x1 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val=software 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val=32 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val=32 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r 
var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val=1 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val=No 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.042 12:49:25 -- accel/accel.sh@20 -- # val= 00:06:21.042 12:49:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # IFS=: 00:06:21.042 12:49:25 -- accel/accel.sh@19 -- # read -r var val 00:06:21.983 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:21.983 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:21.983 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:21.983 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:21.983 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:21.983 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:21.983 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:21.983 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:21.983 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:21.983 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:21.983 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:21.983 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:21.983 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:21.983 12:49:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.983 12:49:27 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:21.983 12:49:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.983 00:06:21.983 real 0m1.280s 00:06:21.983 user 0m1.193s 00:06:21.983 sys 0m0.098s 00:06:21.983 12:49:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.983 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:06:21.983 ************************************ 00:06:21.983 END TEST accel_dif_generate_copy 00:06:21.983 ************************************ 00:06:22.245 12:49:27 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:22.245 12:49:27 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.245 12:49:27 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:22.245 12:49:27 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.245 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:06:22.245 ************************************ 00:06:22.245 START TEST accel_comp 00:06:22.245 ************************************ 00:06:22.245 12:49:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.245 12:49:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.245 12:49:27 -- accel/accel.sh@17 -- # local accel_module 00:06:22.245 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.245 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.245 12:49:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.245 12:49:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.245 12:49:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.245 12:49:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.245 12:49:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.245 12:49:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.245 12:49:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.245 12:49:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.245 12:49:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.245 12:49:27 -- accel/accel.sh@41 -- # jq -r . 00:06:22.245 [2024-04-26 12:49:27.227408] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:22.245 [2024-04-26 12:49:27.227521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777500 ] 00:06:22.245 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.245 [2024-04-26 12:49:27.295545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.505 [2024-04-26 12:49:27.369935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.505 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.505 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.505 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.505 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.505 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.505 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.505 12:49:27 -- accel/accel.sh@20 -- # val=0x1 00:06:22.505 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.505 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.505 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.505 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.505 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.505 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.505 12:49:27 
-- accel/accel.sh@19 -- # read -r var val 00:06:22.505 12:49:27 -- accel/accel.sh@20 -- # val=compress 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val=software 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val=32 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val=32 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val=1 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val=No 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:22.506 12:49:27 -- accel/accel.sh@20 -- # val= 00:06:22.506 12:49:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # IFS=: 00:06:22.506 12:49:27 -- accel/accel.sh@19 -- # read -r var val 00:06:23.447 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.447 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.447 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.447 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # read 
-r var val 00:06:23.447 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.447 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.447 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.447 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.447 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.447 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.447 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.447 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.447 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.447 12:49:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.447 12:49:28 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:23.447 12:49:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.447 00:06:23.447 real 0m1.307s 00:06:23.447 user 0m1.204s 00:06:23.447 sys 0m0.114s 00:06:23.447 12:49:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.447 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.447 ************************************ 00:06:23.447 END TEST accel_comp 00:06:23.447 ************************************ 00:06:23.708 12:49:28 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.708 12:49:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.708 12:49:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.708 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.708 ************************************ 00:06:23.708 START TEST accel_decomp 00:06:23.708 ************************************ 00:06:23.708 12:49:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.708 12:49:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.708 12:49:28 -- accel/accel.sh@17 -- # local accel_module 00:06:23.708 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:49:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.708 12:49:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.708 12:49:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.708 12:49:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.708 12:49:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.708 12:49:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.708 12:49:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.708 12:49:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.708 12:49:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.708 12:49:28 -- accel/accel.sh@41 -- # jq -r . 00:06:23.708 [2024-04-26 12:49:28.716646] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
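[editor's note] For readers skimming the xtrace noise: the accel_decomp test that starts here drives the accel_perf example binary with a software decompress workload and verifies the result. A minimal reconstruction of the logged invocation is sketched below; the real harness additionally pipes a generated accel JSON config into the tool via process substitution, which is why "-c /dev/fd/62" appears in the logged command line.

# Hedged sketch of the invocation logged above (paths as used in this job);
# the generated JSON accel config normally passed with -c is omitted here.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -t 1: run the workload for 1 second; -w decompress: operation under test;
# -l <file>: data file used by the (de)compress workloads; -y: verify output.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y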
00:06:23.708 [2024-04-26 12:49:28.716727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777859 ] 00:06:23.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.969 [2024-04-26 12:49:28.782193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.969 [2024-04-26 12:49:28.852992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=0x1 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=decompress 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=software 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=32 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 
-- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=32 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=1 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val=Yes 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:23.969 12:49:28 -- accel/accel.sh@20 -- # val= 00:06:23.969 12:49:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # IFS=: 00:06:23.969 12:49:28 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:29 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:29 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:29 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:29 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:29 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:29 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:29 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.353 12:49:29 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.353 12:49:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.353 00:06:25.353 real 0m1.298s 00:06:25.353 user 0m1.211s 00:06:25.353 sys 0m0.098s 00:06:25.353 12:49:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.353 12:49:29 -- common/autotest_common.sh@10 -- # set +x 00:06:25.353 ************************************ 00:06:25.353 END TEST accel_decomp 00:06:25.353 ************************************ 00:06:25.353 12:49:30 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.353 12:49:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:25.353 12:49:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.353 12:49:30 -- common/autotest_common.sh@10 -- # set +x 00:06:25.353 ************************************ 00:06:25.353 START TEST accel_decmop_full 00:06:25.353 ************************************ 00:06:25.353 12:49:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.353 12:49:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.353 12:49:30 -- accel/accel.sh@17 -- # local accel_module 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.353 12:49:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.353 12:49:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.353 12:49:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.353 12:49:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.353 12:49:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.353 12:49:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.353 12:49:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.353 12:49:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.353 12:49:30 -- accel/accel.sh@41 -- # jq -r . 00:06:25.353 [2024-04-26 12:49:30.206114] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
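[editor's note] The accel_decmop_full variant launched above reuses the same decompress workload but adds "-o 0". Judging from the values echoed later in this run ('111250 bytes' instead of the '4096 bytes' seen in the earlier tests), this appears to make accel_perf operate on the whole test file in a single buffer rather than in 4 KiB chunks; treat that reading of the flag as an assumption rather than documented behaviour.

# Same sketch as before, with the assumed "full buffer" option added.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0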
00:06:25.353 [2024-04-26 12:49:30.206214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778214 ] 00:06:25.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.353 [2024-04-26 12:49:30.272231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.353 [2024-04-26 12:49:30.343967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=0x1 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=decompress 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=software 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=32 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 
12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=32 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=1 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.353 12:49:30 -- accel/accel.sh@20 -- # val=Yes 00:06:25.353 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.353 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.354 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.354 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.354 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.354 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.354 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:25.354 12:49:30 -- accel/accel.sh@20 -- # val= 00:06:25.354 12:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.354 12:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:25.354 12:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.737 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.737 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.737 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.737 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.737 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.737 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.737 12:49:31 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.737 12:49:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.737 00:06:26.737 real 0m1.308s 00:06:26.737 user 0m1.217s 00:06:26.737 sys 0m0.103s 00:06:26.737 12:49:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.737 12:49:31 -- common/autotest_common.sh@10 -- # set +x 00:06:26.737 ************************************ 00:06:26.737 END TEST accel_decmop_full 00:06:26.737 ************************************ 00:06:26.737 12:49:31 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.737 12:49:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:26.737 12:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.737 12:49:31 -- common/autotest_common.sh@10 -- # set +x 00:06:26.737 ************************************ 00:06:26.737 START TEST accel_decomp_mcore 00:06:26.737 ************************************ 00:06:26.737 12:49:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.737 12:49:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.737 12:49:31 -- accel/accel.sh@17 -- # local accel_module 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 12:49:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.737 12:49:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.737 12:49:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.737 12:49:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.737 12:49:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.737 12:49:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.737 12:49:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.737 12:49:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.737 12:49:31 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.737 12:49:31 -- accel/accel.sh@41 -- # jq -r . 00:06:26.737 [2024-04-26 12:49:31.695306] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
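[editor's note] accel_decomp_mcore runs the same workload across multiple reactors: "-m 0xf" is the standard SPDK core-mask option, and 0xf selects cores 0-3, which matches the "Total cores available: 4" notice and the four "Reactor started on core N" lines that follow.

# Core mask 0xf = binary 1111 = cores 0, 1, 2 and 3.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf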
00:06:26.737 [2024-04-26 12:49:31.695366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778573 ] 00:06:26.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.737 [2024-04-26 12:49:31.757853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.998 [2024-04-26 12:49:31.824909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.998 [2024-04-26 12:49:31.825168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.998 [2024-04-26 12:49:31.825321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.998 [2024-04-26 12:49:31.825321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=0xf 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=decompress 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=software 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=32 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=32 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=1 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val=Yes 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:26.998 12:49:31 -- accel/accel.sh@20 -- # val= 00:06:26.998 12:49:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # IFS=: 00:06:26.998 12:49:31 -- accel/accel.sh@19 -- # read -r var val 00:06:27.939 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.939 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 
12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@20 -- # val= 00:06:27.940 12:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:27.940 12:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:27.940 12:49:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.940 12:49:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.940 12:49:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.940 00:06:27.940 real 0m1.297s 00:06:27.940 user 0m4.443s 00:06:27.940 sys 0m0.103s 00:06:27.940 12:49:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.940 12:49:32 -- common/autotest_common.sh@10 -- # set +x 00:06:27.940 ************************************ 00:06:27.940 END TEST accel_decomp_mcore 00:06:27.940 ************************************ 00:06:28.200 12:49:33 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.200 12:49:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:28.200 12:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.200 12:49:33 -- common/autotest_common.sh@10 -- # set +x 00:06:28.200 ************************************ 00:06:28.200 START TEST accel_decomp_full_mcore 00:06:28.200 ************************************ 00:06:28.200 12:49:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.200 12:49:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.200 12:49:33 -- accel/accel.sh@17 -- # local accel_module 00:06:28.200 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.200 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.200 12:49:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.200 12:49:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.200 12:49:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.200 12:49:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.200 12:49:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.200 12:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.200 12:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.200 12:49:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.200 12:49:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.200 12:49:33 -- accel/accel.sh@41 -- # jq -r . 00:06:28.200 [2024-04-26 12:49:33.174764] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
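[editor's note] The "START TEST"/"END TEST" banners and the real/user/sys lines scattered through this output come from the run_test wrapper in test/common/autotest_common.sh, which times each test case with bash's time builtin. A stripped-down sketch of that behaviour follows (illustrative only; the real wrapper also manages xtrace and error reporting):

run_test_sketch() {   # simplified stand-in for run_test, not the real code
    local name=$1; shift
    printf '%s\n' '************************' "START TEST $name" '************************'
    time "$@"         # source of the real/user/sys lines seen in this log
    printf '%s\n' '************************' "END TEST $name" '************************'
}
run_test_sketch accel_decomp_full_mcore accel_test -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf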
00:06:28.200 [2024-04-26 12:49:33.174834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778936 ] 00:06:28.200 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.200 [2024-04-26 12:49:33.240488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.463 [2024-04-26 12:49:33.314005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.463 [2024-04-26 12:49:33.314136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.463 [2024-04-26 12:49:33.314296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.463 [2024-04-26 12:49:33.314296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=0xf 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=decompress 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=software 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=32 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=32 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=1 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val=Yes 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:28.463 12:49:33 -- accel/accel.sh@20 -- # val= 00:06:28.463 12:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:28.463 12:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:29.402 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.402 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.402 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.402 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.402 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.402 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.402 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.402 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.402 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.402 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.402 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.663 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.663 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.663 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.663 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.663 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.663 
12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.663 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.663 12:49:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.663 12:49:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.663 00:06:29.663 real 0m1.320s 00:06:29.663 user 0m4.497s 00:06:29.663 sys 0m0.113s 00:06:29.663 12:49:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.663 12:49:34 -- common/autotest_common.sh@10 -- # set +x 00:06:29.663 ************************************ 00:06:29.663 END TEST accel_decomp_full_mcore 00:06:29.663 ************************************ 00:06:29.663 12:49:34 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.663 12:49:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:29.663 12:49:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.663 12:49:34 -- common/autotest_common.sh@10 -- # set +x 00:06:29.663 ************************************ 00:06:29.663 START TEST accel_decomp_mthread 00:06:29.663 ************************************ 00:06:29.663 12:49:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.663 12:49:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.663 12:49:34 -- accel/accel.sh@17 -- # local accel_module 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.663 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.663 12:49:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.663 12:49:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.663 12:49:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.663 12:49:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.663 12:49:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.663 12:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.663 12:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.663 12:49:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.663 12:49:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.663 12:49:34 -- accel/accel.sh@41 -- # jq -r . 00:06:29.663 [2024-04-26 12:49:34.684786] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
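[editor's note] accel_decomp_mthread keeps the single-core mask but adds "-T 2". In accel_perf this option selects the number of worker threads per core, so the one reactor on core 0 drives two parallel task streams; that reading is inferred from the option name and the unchanged single-reactor startup, so confirm against the tool's usage text if the exact semantics matter.

# Assumed meaning: -T <n> = worker threads per core (2 here, on core 0 only).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2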
00:06:29.663 [2024-04-26 12:49:34.684865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779280 ] 00:06:29.663 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.923 [2024-04-26 12:49:34.750111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.923 [2024-04-26 12:49:34.821679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val=0x1 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val=decompress 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val=software 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val=32 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 
-- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val=32 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val=2 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.923 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.923 12:49:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.923 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.924 12:49:34 -- accel/accel.sh@20 -- # val=Yes 00:06:29.924 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.924 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.924 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:29.924 12:49:34 -- accel/accel.sh@20 -- # val= 00:06:29.924 12:49:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # IFS=: 00:06:29.924 12:49:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@20 -- # val= 00:06:31.305 12:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@20 -- # val= 00:06:31.305 12:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@20 -- # val= 00:06:31.305 12:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@20 -- # val= 00:06:31.305 12:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@20 -- # val= 00:06:31.305 12:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@20 -- # val= 00:06:31.305 12:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@20 -- # val= 00:06:31.305 12:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:31.305 12:49:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.305 12:49:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.305 12:49:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.305 00:06:31.305 real 0m1.304s 00:06:31.305 user 0m1.212s 00:06:31.305 sys 0m0.105s 00:06:31.305 12:49:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.305 12:49:35 -- common/autotest_common.sh@10 -- # set +x 
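[editor's note] Most of the repetitive 'case "$var"' / 'IFS=:' / 'read -r var val' noise in this log is the accel.sh helper reading accel_perf's summary output line by line (splitting each "key: value" pair on ':') and recording which module and opcode actually ran; the '[[ -n software ]]', '[[ -n decompress ]]' and 'software == software' checks just above are the resulting assertions that the software module executed the expected operation. A rough sketch of that pattern, with illustrative key names and a stand-in file for the captured accel_perf output:

# perf_output.txt stands in for the accel_perf output the harness captures;
# the matched keys and variable names are illustrative, not copied from accel.sh.
accel_module= ; accel_opc=
while IFS=: read -r var val; do
    case "$var" in
        *"Module"*)   accel_module=${val//[[:space:]]/} ;;
        *"Workload"*) accel_opc=${val//[[:space:]]/} ;;
    esac
done < perf_output.txt
[[ -n $accel_module && -n $accel_opc ]]
[[ $accel_module == software ]]   # the test expects the software path here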
00:06:31.305 ************************************ 00:06:31.305 END TEST accel_decomp_mthread 00:06:31.305 ************************************ 00:06:31.305 12:49:35 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.305 12:49:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:31.305 12:49:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.305 12:49:35 -- common/autotest_common.sh@10 -- # set +x 00:06:31.305 ************************************ 00:06:31.305 START TEST accel_deomp_full_mthread 00:06:31.305 ************************************ 00:06:31.305 12:49:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.305 12:49:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.305 12:49:36 -- accel/accel.sh@17 -- # local accel_module 00:06:31.305 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.305 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.306 12:49:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.306 12:49:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.306 12:49:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.306 12:49:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.306 12:49:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.306 12:49:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.306 12:49:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.306 12:49:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.306 12:49:36 -- accel/accel.sh@41 -- # jq -r . 00:06:31.306 [2024-04-26 12:49:36.168717] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:31.306 [2024-04-26 12:49:36.168824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779525 ] 00:06:31.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.306 [2024-04-26 12:49:36.234438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.306 [2024-04-26 12:49:36.302556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=0x1 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=decompress 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=software 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=32 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 
12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=32 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=2 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val=Yes 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:31.306 12:49:36 -- accel/accel.sh@20 -- # val= 00:06:31.306 12:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:31.306 12:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@20 -- # val= 00:06:32.779 12:49:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # IFS=: 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@20 -- # val= 00:06:32.779 12:49:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # IFS=: 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@20 -- # val= 00:06:32.779 12:49:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # IFS=: 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@20 -- # val= 00:06:32.779 12:49:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # IFS=: 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@20 -- # val= 00:06:32.779 12:49:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # IFS=: 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@20 -- # val= 00:06:32.779 12:49:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # IFS=: 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@20 -- # val= 00:06:32.779 12:49:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # IFS=: 00:06:32.779 12:49:37 -- accel/accel.sh@19 -- # read -r var val 00:06:32.779 12:49:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.779 12:49:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.779 12:49:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.779 00:06:32.779 real 0m1.329s 00:06:32.779 user 0m1.244s 00:06:32.779 sys 0m0.096s 00:06:32.779 12:49:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.779 12:49:37 -- common/autotest_common.sh@10 -- # 
set +x 00:06:32.779 ************************************ 00:06:32.779 END TEST accel_deomp_full_mthread 00:06:32.779 ************************************ 00:06:32.779 12:49:37 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:32.779 12:49:37 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.779 12:49:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:32.779 12:49:37 -- accel/accel.sh@137 -- # build_accel_config 00:06:32.779 12:49:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.779 12:49:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.779 12:49:37 -- common/autotest_common.sh@10 -- # set +x 00:06:32.779 12:49:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.779 12:49:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.779 12:49:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.779 12:49:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.779 12:49:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.779 12:49:37 -- accel/accel.sh@41 -- # jq -r . 00:06:32.779 ************************************ 00:06:32.779 START TEST accel_dif_functional_tests 00:06:32.779 ************************************ 00:06:32.779 12:49:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.779 [2024-04-26 12:49:37.698470] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:32.779 [2024-04-26 12:49:37.698524] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779801 ] 00:06:32.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.779 [2024-04-26 12:49:37.762822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.779 [2024-04-26 12:49:37.836515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.779 [2024-04-26 12:49:37.836632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.779 [2024-04-26 12:49:37.836635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.040 00:06:33.040 00:06:33.040 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.040 http://cunit.sourceforge.net/ 00:06:33.040 00:06:33.040 00:06:33.040 Suite: accel_dif 00:06:33.040 Test: verify: DIF generated, GUARD check ...passed 00:06:33.040 Test: verify: DIF generated, APPTAG check ...passed 00:06:33.040 Test: verify: DIF generated, REFTAG check ...passed 00:06:33.040 Test: verify: DIF not generated, GUARD check ...[2024-04-26 12:49:37.893097] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:33.040 [2024-04-26 12:49:37.893137] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:33.040 passed 00:06:33.040 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 12:49:37.893167] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:33.040 [2024-04-26 12:49:37.893182] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:33.040 passed 00:06:33.040 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 12:49:37.893199] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:33.040 [2024-04-26 
12:49:37.893214] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:33.040 passed 00:06:33.040 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:33.040 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 12:49:37.893256] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:33.040 passed 00:06:33.040 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:33.040 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:33.040 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:33.040 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 12:49:37.893378] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:33.040 passed 00:06:33.040 Test: generate copy: DIF generated, GUARD check ...passed 00:06:33.040 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:33.040 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:33.040 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:33.040 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:33.040 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:33.041 Test: generate copy: iovecs-len validate ...[2024-04-26 12:49:37.893564] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:33.041 passed 00:06:33.041 Test: generate copy: buffer alignment validate ...passed 00:06:33.041 00:06:33.041 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.041 suites 1 1 n/a 0 0 00:06:33.041 tests 20 20 20 0 0 00:06:33.041 asserts 204 204 204 0 n/a 00:06:33.041 00:06:33.041 Elapsed time = 0.000 seconds 00:06:33.041 00:06:33.041 real 0m0.358s 00:06:33.041 user 0m0.456s 00:06:33.041 sys 0m0.124s 00:06:33.041 12:49:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.041 12:49:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.041 ************************************ 00:06:33.041 END TEST accel_dif_functional_tests 00:06:33.041 ************************************ 00:06:33.041 00:06:33.041 real 0m33.057s 00:06:33.041 user 0m34.820s 00:06:33.041 sys 0m5.462s 00:06:33.041 12:49:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.041 12:49:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.041 ************************************ 00:06:33.041 END TEST accel 00:06:33.041 ************************************ 00:06:33.041 12:49:38 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:33.041 12:49:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.041 12:49:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.041 12:49:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.302 ************************************ 00:06:33.302 START TEST accel_rpc 00:06:33.302 ************************************ 00:06:33.302 12:49:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:33.302 * Looking for test storage... 
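The DIF functional run above is the standalone CUnit binary from test/accel/dif, launched with an accel JSON config handed to it on file descriptor 62 (the -c /dev/fd/62 in its command line). A hedged way to reproduce that launch by hand, assuming an empty JSON object is an acceptable accel config for this binary:

  # hedged sketch: feed the DIF test an (empty) accel config on fd 62
  ./test/accel/dif/dif -c /dev/fd/62 62< <(echo '{}')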
00:06:33.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:33.302 12:49:38 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.302 12:49:38 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3780097 00:06:33.302 12:49:38 -- accel/accel_rpc.sh@15 -- # waitforlisten 3780097 00:06:33.302 12:49:38 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:33.302 12:49:38 -- common/autotest_common.sh@817 -- # '[' -z 3780097 ']' 00:06:33.302 12:49:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.302 12:49:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:33.302 12:49:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.302 12:49:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:33.302 12:49:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.564 [2024-04-26 12:49:38.393271] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:33.564 [2024-04-26 12:49:38.393327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780097 ] 00:06:33.564 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.564 [2024-04-26 12:49:38.455798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.564 [2024-04-26 12:49:38.518618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.135 12:49:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:34.135 12:49:39 -- common/autotest_common.sh@850 -- # return 0 00:06:34.135 12:49:39 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:34.135 12:49:39 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:34.135 12:49:39 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:34.135 12:49:39 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:34.135 12:49:39 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:34.135 12:49:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.135 12:49:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.135 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.395 ************************************ 00:06:34.395 START TEST accel_assign_opcode 00:06:34.395 ************************************ 00:06:34.395 12:49:39 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:34.395 12:49:39 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:34.395 12:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.395 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.395 [2024-04-26 12:49:39.316902] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:34.396 12:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.396 12:49:39 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:34.396 12:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.396 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.396 [2024-04-26 12:49:39.328927] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:34.396 12:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.396 12:49:39 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:34.396 12:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.396 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 12:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.657 12:49:39 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:34.657 12:49:39 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:34.657 12:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.657 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 12:49:39 -- accel/accel_rpc.sh@42 -- # grep software 00:06:34.657 12:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.657 software 00:06:34.657 00:06:34.657 real 0m0.213s 00:06:34.657 user 0m0.051s 00:06:34.657 sys 0m0.009s 00:06:34.657 12:49:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.657 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 ************************************ 00:06:34.657 END TEST accel_assign_opcode 00:06:34.657 ************************************ 00:06:34.657 12:49:39 -- accel/accel_rpc.sh@55 -- # killprocess 3780097 00:06:34.657 12:49:39 -- common/autotest_common.sh@936 -- # '[' -z 3780097 ']' 00:06:34.657 12:49:39 -- common/autotest_common.sh@940 -- # kill -0 3780097 00:06:34.657 12:49:39 -- common/autotest_common.sh@941 -- # uname 00:06:34.657 12:49:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.657 12:49:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3780097 00:06:34.657 12:49:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.657 12:49:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.657 12:49:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3780097' 00:06:34.657 killing process with pid 3780097 00:06:34.657 12:49:39 -- common/autotest_common.sh@955 -- # kill 3780097 00:06:34.657 12:49:39 -- common/autotest_common.sh@960 -- # wait 3780097 00:06:34.918 00:06:34.918 real 0m1.593s 00:06:34.918 user 0m1.737s 00:06:34.918 sys 0m0.439s 00:06:34.918 12:49:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.918 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.918 ************************************ 00:06:34.918 END TEST accel_rpc 00:06:34.918 ************************************ 00:06:34.918 12:49:39 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.918 12:49:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.918 12:49:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.918 12:49:39 -- common/autotest_common.sh@10 -- # set +x 00:06:35.179 ************************************ 00:06:35.179 START TEST app_cmdline 00:06:35.179 ************************************ 00:06:35.179 12:49:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:35.179 * Looking for test storage... 
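Strung together, the accel_rpc exercise above is a short RPC sequence against a target started with --wait-for-rpc: assign the copy opcode to a bogus module, reassign it to software, finish framework init, and read the assignment back. A sketch of the equivalent manual flow, assuming the default /var/tmp/spdk.sock socket and a sleep standing in for the harness's waitforlisten helper:

  ./build/bin/spdk_tgt --wait-for-rpc &
  sleep 1                                                   # stand-in for waitforlisten
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # target logs the assignment notice
  ./scripts/rpc.py accel_assign_opc -o copy -m software     # the later assignment is what sticks
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software"
  kill %1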
00:06:35.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:35.179 12:49:40 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:35.179 12:49:40 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3780521 00:06:35.179 12:49:40 -- app/cmdline.sh@18 -- # waitforlisten 3780521 00:06:35.179 12:49:40 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:35.179 12:49:40 -- common/autotest_common.sh@817 -- # '[' -z 3780521 ']' 00:06:35.179 12:49:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.179 12:49:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.179 12:49:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.179 12:49:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.179 12:49:40 -- common/autotest_common.sh@10 -- # set +x 00:06:35.179 [2024-04-26 12:49:40.178996] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:35.179 [2024-04-26 12:49:40.179070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780521 ] 00:06:35.179 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.438 [2024-04-26 12:49:40.243345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.438 [2024-04-26 12:49:40.315374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.010 12:49:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.010 12:49:40 -- common/autotest_common.sh@850 -- # return 0 00:06:36.010 12:49:40 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:36.270 { 00:06:36.270 "version": "SPDK v24.05-pre git sha1 06472fb6d", 00:06:36.270 "fields": { 00:06:36.270 "major": 24, 00:06:36.270 "minor": 5, 00:06:36.270 "patch": 0, 00:06:36.270 "suffix": "-pre", 00:06:36.270 "commit": "06472fb6d" 00:06:36.270 } 00:06:36.270 } 00:06:36.270 12:49:41 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:36.270 12:49:41 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:36.270 12:49:41 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:36.270 12:49:41 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:36.270 12:49:41 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:36.270 12:49:41 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:36.270 12:49:41 -- app/cmdline.sh@26 -- # sort 00:06:36.270 12:49:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.270 12:49:41 -- common/autotest_common.sh@10 -- # set +x 00:06:36.270 12:49:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.271 12:49:41 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:36.271 12:49:41 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:36.271 12:49:41 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.271 12:49:41 -- common/autotest_common.sh@638 -- # local es=0 00:06:36.271 12:49:41 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.271 12:49:41 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.271 12:49:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:36.271 12:49:41 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.271 12:49:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:36.271 12:49:41 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.271 12:49:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:36.271 12:49:41 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.271 12:49:41 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:36.271 12:49:41 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.271 request: 00:06:36.271 { 00:06:36.271 "method": "env_dpdk_get_mem_stats", 00:06:36.271 "req_id": 1 00:06:36.271 } 00:06:36.271 Got JSON-RPC error response 00:06:36.271 response: 00:06:36.271 { 00:06:36.271 "code": -32601, 00:06:36.271 "message": "Method not found" 00:06:36.271 } 00:06:36.271 12:49:41 -- common/autotest_common.sh@641 -- # es=1 00:06:36.271 12:49:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:36.271 12:49:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:36.271 12:49:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:36.271 12:49:41 -- app/cmdline.sh@1 -- # killprocess 3780521 00:06:36.271 12:49:41 -- common/autotest_common.sh@936 -- # '[' -z 3780521 ']' 00:06:36.271 12:49:41 -- common/autotest_common.sh@940 -- # kill -0 3780521 00:06:36.271 12:49:41 -- common/autotest_common.sh@941 -- # uname 00:06:36.271 12:49:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.271 12:49:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3780521 00:06:36.531 12:49:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.531 12:49:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.531 12:49:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3780521' 00:06:36.531 killing process with pid 3780521 00:06:36.531 12:49:41 -- common/autotest_common.sh@955 -- # kill 3780521 00:06:36.531 12:49:41 -- common/autotest_common.sh@960 -- # wait 3780521 00:06:36.531 00:06:36.531 real 0m1.524s 00:06:36.531 user 0m1.810s 00:06:36.531 sys 0m0.396s 00:06:36.531 12:49:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.531 12:49:41 -- common/autotest_common.sh@10 -- # set +x 00:06:36.531 ************************************ 00:06:36.531 END TEST app_cmdline 00:06:36.531 ************************************ 00:06:36.531 12:49:41 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.531 12:49:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.531 12:49:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.531 12:49:41 -- common/autotest_common.sh@10 -- # set +x 00:06:36.791 ************************************ 00:06:36.791 START TEST version 00:06:36.791 
************************************ 00:06:36.791 12:49:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.791 * Looking for test storage... 00:06:36.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.791 12:49:41 -- app/version.sh@17 -- # get_header_version major 00:06:36.791 12:49:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.791 12:49:41 -- app/version.sh@14 -- # cut -f2 00:06:36.791 12:49:41 -- app/version.sh@14 -- # tr -d '"' 00:06:36.791 12:49:41 -- app/version.sh@17 -- # major=24 00:06:36.791 12:49:41 -- app/version.sh@18 -- # get_header_version minor 00:06:36.791 12:49:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.791 12:49:41 -- app/version.sh@14 -- # cut -f2 00:06:36.791 12:49:41 -- app/version.sh@14 -- # tr -d '"' 00:06:36.791 12:49:41 -- app/version.sh@18 -- # minor=5 00:06:36.791 12:49:41 -- app/version.sh@19 -- # get_header_version patch 00:06:36.791 12:49:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.791 12:49:41 -- app/version.sh@14 -- # cut -f2 00:06:36.791 12:49:41 -- app/version.sh@14 -- # tr -d '"' 00:06:37.051 12:49:41 -- app/version.sh@19 -- # patch=0 00:06:37.051 12:49:41 -- app/version.sh@20 -- # get_header_version suffix 00:06:37.051 12:49:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.051 12:49:41 -- app/version.sh@14 -- # cut -f2 00:06:37.051 12:49:41 -- app/version.sh@14 -- # tr -d '"' 00:06:37.051 12:49:41 -- app/version.sh@20 -- # suffix=-pre 00:06:37.051 12:49:41 -- app/version.sh@22 -- # version=24.5 00:06:37.051 12:49:41 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:37.051 12:49:41 -- app/version.sh@28 -- # version=24.5rc0 00:06:37.051 12:49:41 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:37.051 12:49:41 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:37.051 12:49:41 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:37.051 12:49:41 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:37.051 00:06:37.051 real 0m0.167s 00:06:37.051 user 0m0.084s 00:06:37.051 sys 0m0.120s 00:06:37.051 12:49:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.051 12:49:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.051 ************************************ 00:06:37.051 END TEST version 00:06:37.051 ************************************ 00:06:37.051 12:49:41 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:37.051 12:49:41 -- spdk/autotest.sh@194 -- # uname -s 00:06:37.051 12:49:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:37.051 12:49:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:37.051 12:49:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:37.051 12:49:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:37.051 12:49:41 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:37.051 12:49:41 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:37.051 12:49:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:37.051 12:49:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.051 12:49:41 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:37.051 12:49:41 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:37.051 12:49:41 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:37.051 12:49:41 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:37.051 12:49:41 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:37.051 12:49:41 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:37.051 12:49:41 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.051 12:49:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:37.051 12:49:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.051 12:49:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.312 ************************************ 00:06:37.312 START TEST nvmf_tcp 00:06:37.312 ************************************ 00:06:37.312 12:49:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.312 * Looking for test storage... 00:06:37.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.312 12:49:42 -- nvmf/common.sh@7 -- # uname -s 00:06:37.312 12:49:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.312 12:49:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.312 12:49:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.312 12:49:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.312 12:49:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.312 12:49:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.312 12:49:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.312 12:49:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.312 12:49:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.312 12:49:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.312 12:49:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:37.312 12:49:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:37.312 12:49:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.312 12:49:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.312 12:49:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.312 12:49:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.312 12:49:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.312 12:49:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.312 12:49:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.312 12:49:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.312 12:49:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.312 12:49:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.312 12:49:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.312 12:49:42 -- paths/export.sh@5 -- # export PATH 00:06:37.312 12:49:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.312 12:49:42 -- nvmf/common.sh@47 -- # : 0 00:06:37.312 12:49:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.312 12:49:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.312 12:49:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.312 12:49:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.312 12:49:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.312 12:49:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.312 12:49:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.312 12:49:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:37.312 12:49:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:37.312 12:49:42 -- common/autotest_common.sh@10 -- # set +x 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:37.312 12:49:42 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:37.312 12:49:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:37.312 12:49:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.312 12:49:42 -- common/autotest_common.sh@10 -- # set +x 00:06:37.573 ************************************ 00:06:37.573 START TEST nvmf_example 00:06:37.573 ************************************ 00:06:37.573 12:49:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:37.573 * Looking for test storage... 
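Both nvmf.sh and the per-target scripts source test/nvmf/common.sh, which is where the 4420 port, the 192.168.100 prefix and the host identity used above come from. A compressed sketch of the identity part; the uuid-stripping expansion is an assumption about how the hostid is derived, not a quote of common.sh:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # the bare uuid portion
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")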
00:06:37.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.573 12:49:42 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.573 12:49:42 -- nvmf/common.sh@7 -- # uname -s 00:06:37.573 12:49:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.573 12:49:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.573 12:49:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.573 12:49:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.573 12:49:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.573 12:49:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.573 12:49:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.573 12:49:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.573 12:49:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.573 12:49:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.573 12:49:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:37.573 12:49:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:37.573 12:49:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.573 12:49:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.573 12:49:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.573 12:49:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.573 12:49:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.573 12:49:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.573 12:49:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.573 12:49:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.573 12:49:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.573 12:49:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.573 12:49:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.573 12:49:42 -- paths/export.sh@5 -- # export PATH 00:06:37.573 12:49:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.573 12:49:42 -- nvmf/common.sh@47 -- # : 0 00:06:37.573 12:49:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.573 12:49:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.573 12:49:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.573 12:49:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.573 12:49:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.573 12:49:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.573 12:49:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.573 12:49:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.573 12:49:42 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:37.573 12:49:42 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:37.573 12:49:42 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:37.573 12:49:42 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:37.573 12:49:42 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:37.573 12:49:42 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:37.573 12:49:42 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:37.573 12:49:42 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:37.573 12:49:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:37.573 12:49:42 -- common/autotest_common.sh@10 -- # set +x 00:06:37.573 12:49:42 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:37.574 12:49:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:37.574 12:49:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.574 12:49:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:37.574 12:49:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:37.574 12:49:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:37.574 12:49:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.574 12:49:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.574 12:49:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.574 12:49:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:37.574 12:49:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:37.574 12:49:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:37.574 12:49:42 -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.709 12:49:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:45.709 12:49:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.709 12:49:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.709 12:49:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.709 12:49:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.709 12:49:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.709 12:49:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.709 12:49:49 -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.709 12:49:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.709 12:49:49 -- nvmf/common.sh@296 -- # e810=() 00:06:45.709 12:49:49 -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.709 12:49:49 -- nvmf/common.sh@297 -- # x722=() 00:06:45.709 12:49:49 -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.709 12:49:49 -- nvmf/common.sh@298 -- # mlx=() 00:06:45.709 12:49:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.709 12:49:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.709 12:49:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.709 12:49:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:45.709 12:49:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.709 12:49:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.709 12:49:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:45.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:45.709 12:49:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.709 12:49:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:45.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:45.709 12:49:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
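The device scan above walks the PCI bus cache for Intel E810 IDs (0x1592/0x159b) and then lists the kernel netdevs parked under each matching function in sysfs, which is how it lands on cvl_0_0 and cvl_0_1. A hedged standalone equivalent for the same box:

  # list 8086:159b functions and the netdevs registered under them
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
  done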
00:06:45.709 12:49:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.709 12:49:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.709 12:49:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.709 12:49:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.709 12:49:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.709 12:49:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:45.709 Found net devices under 0000:31:00.0: cvl_0_0 00:06:45.709 12:49:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.709 12:49:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.709 12:49:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.709 12:49:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.709 12:49:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.709 12:49:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:45.709 Found net devices under 0000:31:00.1: cvl_0_1 00:06:45.709 12:49:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.709 12:49:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:45.709 12:49:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:45.709 12:49:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:45.709 12:49:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.709 12:49:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.709 12:49:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.709 12:49:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:45.709 12:49:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.709 12:49:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.709 12:49:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:45.709 12:49:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.709 12:49:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.709 12:49:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:45.709 12:49:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:45.709 12:49:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.709 12:49:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.709 12:49:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.709 12:49:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.709 12:49:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:45.709 12:49:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.709 12:49:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.709 12:49:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.709 12:49:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:45.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:45.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:06:45.709 00:06:45.709 --- 10.0.0.2 ping statistics --- 00:06:45.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.709 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:06:45.709 12:49:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:06:45.709 00:06:45.709 --- 10.0.0.1 ping statistics --- 00:06:45.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.709 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:45.709 12:49:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.709 12:49:49 -- nvmf/common.sh@411 -- # return 0 00:06:45.709 12:49:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:45.709 12:49:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.709 12:49:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:45.709 12:49:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:45.710 12:49:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.710 12:49:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:45.710 12:49:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:45.710 12:49:49 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:45.710 12:49:49 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:45.710 12:49:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.710 12:49:49 -- common/autotest_common.sh@10 -- # set +x 00:06:45.710 12:49:49 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:45.710 12:49:49 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:45.710 12:49:49 -- target/nvmf_example.sh@34 -- # nvmfpid=3784864 00:06:45.710 12:49:49 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:45.710 12:49:49 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:45.710 12:49:49 -- target/nvmf_example.sh@36 -- # waitforlisten 3784864 00:06:45.710 12:49:49 -- common/autotest_common.sh@817 -- # '[' -z 3784864 ']' 00:06:45.710 12:49:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.710 12:49:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.710 12:49:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
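What nvmf_tcp_init builds above: cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, and the two pings confirm reachability in both directions before nvme-tcp is modprobed and the example target is started inside the namespace. The bare commands, as they appear in the trace (the address flushes and the iptables ACCEPT rule are omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator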
00:06:45.710 12:49:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.710 12:49:49 -- common/autotest_common.sh@10 -- # set +x 00:06:45.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.710 12:49:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:45.710 12:49:50 -- common/autotest_common.sh@850 -- # return 0 00:06:45.710 12:49:50 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:45.710 12:49:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.710 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.710 12:49:50 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:45.710 12:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.710 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.710 12:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.710 12:49:50 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:45.710 12:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.710 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.710 12:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.710 12:49:50 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:45.710 12:49:50 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.710 12:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.710 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 12:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.970 12:49:50 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:45.970 12:49:50 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:45.970 12:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.970 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 12:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.970 12:49:50 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.970 12:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.970 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 12:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.970 12:49:50 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:45.970 12:49:50 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:45.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.963 Initializing NVMe Controllers 00:06:55.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:55.963 Initialization complete. Launching workers. 
00:06:55.963 ======================================================== 00:06:55.963 Latency(us) 00:06:55.963 Device Information : IOPS MiB/s Average min max 00:06:55.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18450.85 72.07 3469.56 669.80 19988.68 00:06:55.963 ======================================================== 00:06:55.963 Total : 18450.85 72.07 3469.56 669.80 19988.68 00:06:55.963 00:06:55.963 12:50:00 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:55.963 12:50:00 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:55.963 12:50:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:55.963 12:50:00 -- nvmf/common.sh@117 -- # sync 00:06:55.963 12:50:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.963 12:50:00 -- nvmf/common.sh@120 -- # set +e 00:06:55.963 12:50:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.963 12:50:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.963 rmmod nvme_tcp 00:06:55.963 rmmod nvme_fabrics 00:06:55.963 rmmod nvme_keyring 00:06:56.223 12:50:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:56.223 12:50:01 -- nvmf/common.sh@124 -- # set -e 00:06:56.223 12:50:01 -- nvmf/common.sh@125 -- # return 0 00:06:56.223 12:50:01 -- nvmf/common.sh@478 -- # '[' -n 3784864 ']' 00:06:56.223 12:50:01 -- nvmf/common.sh@479 -- # killprocess 3784864 00:06:56.223 12:50:01 -- common/autotest_common.sh@936 -- # '[' -z 3784864 ']' 00:06:56.223 12:50:01 -- common/autotest_common.sh@940 -- # kill -0 3784864 00:06:56.223 12:50:01 -- common/autotest_common.sh@941 -- # uname 00:06:56.223 12:50:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.223 12:50:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3784864 00:06:56.223 12:50:01 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:56.223 12:50:01 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:56.223 12:50:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3784864' 00:06:56.223 killing process with pid 3784864 00:06:56.223 12:50:01 -- common/autotest_common.sh@955 -- # kill 3784864 00:06:56.223 12:50:01 -- common/autotest_common.sh@960 -- # wait 3784864 00:06:56.223 nvmf threads initialize successfully 00:06:56.223 bdev subsystem init successfully 00:06:56.223 created a nvmf target service 00:06:56.223 create targets's poll groups done 00:06:56.223 all subsystems of target started 00:06:56.223 nvmf target is running 00:06:56.223 all subsystems of target stopped 00:06:56.223 destroy targets's poll groups done 00:06:56.223 destroyed the nvmf target service 00:06:56.223 bdev subsystem finish successfully 00:06:56.223 nvmf threads destroy successfully 00:06:56.223 12:50:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:56.223 12:50:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:56.223 12:50:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:56.223 12:50:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:56.223 12:50:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:56.223 12:50:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.223 12:50:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.223 12:50:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.774 12:50:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.774 12:50:03 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:58.774 12:50:03 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:06:58.774 12:50:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.774 00:06:58.774 real 0m20.940s 00:06:58.774 user 0m46.232s 00:06:58.774 sys 0m6.411s 00:06:58.774 12:50:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.774 12:50:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.774 ************************************ 00:06:58.774 END TEST nvmf_example 00:06:58.774 ************************************ 00:06:58.774 12:50:03 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.774 12:50:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:58.774 12:50:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.774 12:50:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.774 ************************************ 00:06:58.774 START TEST nvmf_filesystem 00:06:58.774 ************************************ 00:06:58.774 12:50:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.774 * Looking for test storage... 00:06:58.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.774 12:50:03 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:58.774 12:50:03 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:58.774 12:50:03 -- common/autotest_common.sh@34 -- # set -e 00:06:58.774 12:50:03 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:58.774 12:50:03 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:58.774 12:50:03 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:58.774 12:50:03 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:58.774 12:50:03 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:58.774 12:50:03 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:58.774 12:50:03 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:58.774 12:50:03 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:58.774 12:50:03 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:58.774 12:50:03 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:58.774 12:50:03 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:58.774 12:50:03 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:58.774 12:50:03 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:58.774 12:50:03 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:58.774 12:50:03 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:58.774 12:50:03 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:58.774 12:50:03 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:58.774 12:50:03 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:58.774 12:50:03 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:58.774 12:50:03 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:58.774 12:50:03 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:58.774 12:50:03 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.774 12:50:03 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:58.774 12:50:03 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:58.774 12:50:03 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:58.774 12:50:03 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:58.774 12:50:03 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:58.774 12:50:03 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:58.774 12:50:03 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:58.774 12:50:03 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:58.774 12:50:03 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:58.774 12:50:03 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:58.774 12:50:03 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:58.774 12:50:03 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:58.774 12:50:03 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:58.774 12:50:03 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:58.774 12:50:03 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:58.774 12:50:03 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:58.774 12:50:03 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:58.774 12:50:03 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:58.774 12:50:03 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:58.774 12:50:03 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:58.774 12:50:03 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:58.774 12:50:03 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:58.774 12:50:03 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:58.774 12:50:03 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:58.774 12:50:03 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:58.774 12:50:03 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:58.774 12:50:03 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:58.774 12:50:03 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:58.774 12:50:03 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:58.774 12:50:03 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:58.774 12:50:03 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:58.774 12:50:03 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:58.774 12:50:03 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:58.774 12:50:03 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:58.774 12:50:03 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:58.774 12:50:03 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:58.774 12:50:03 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:58.774 12:50:03 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:58.774 12:50:03 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:58.774 12:50:03 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:58.774 
12:50:03 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:58.774 12:50:03 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:58.774 12:50:03 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:58.774 12:50:03 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:58.774 12:50:03 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:58.774 12:50:03 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:58.774 12:50:03 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:58.774 12:50:03 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:58.774 12:50:03 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:58.774 12:50:03 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:58.774 12:50:03 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:58.774 12:50:03 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:58.774 12:50:03 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:58.774 12:50:03 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:58.774 12:50:03 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:58.774 12:50:03 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:58.774 12:50:03 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:58.774 12:50:03 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:58.774 12:50:03 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.774 12:50:03 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.774 12:50:03 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.774 12:50:03 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.774 12:50:03 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.775 12:50:03 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.775 12:50:03 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.775 12:50:03 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.775 12:50:03 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:58.775 12:50:03 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:58.775 12:50:03 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:58.775 12:50:03 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:58.775 12:50:03 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:58.775 12:50:03 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:58.775 12:50:03 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:58.775 12:50:03 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:58.775 #define SPDK_CONFIG_H 00:06:58.775 #define SPDK_CONFIG_APPS 1 00:06:58.775 #define SPDK_CONFIG_ARCH native 00:06:58.775 #undef SPDK_CONFIG_ASAN 00:06:58.775 #undef SPDK_CONFIG_AVAHI 00:06:58.775 #undef SPDK_CONFIG_CET 00:06:58.775 #define SPDK_CONFIG_COVERAGE 1 00:06:58.775 #define SPDK_CONFIG_CROSS_PREFIX 00:06:58.775 #undef SPDK_CONFIG_CRYPTO 00:06:58.775 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:58.775 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:58.775 #undef SPDK_CONFIG_DAOS 00:06:58.775 #define SPDK_CONFIG_DAOS_DIR 00:06:58.775 #define SPDK_CONFIG_DEBUG 1 00:06:58.775 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:58.775 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:58.775 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:58.775 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:58.775 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:58.775 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.775 #define SPDK_CONFIG_EXAMPLES 1 00:06:58.775 #undef SPDK_CONFIG_FC 00:06:58.775 #define SPDK_CONFIG_FC_PATH 00:06:58.775 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:58.775 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:58.775 #undef SPDK_CONFIG_FUSE 00:06:58.775 #undef SPDK_CONFIG_FUZZER 00:06:58.775 #define SPDK_CONFIG_FUZZER_LIB 00:06:58.775 #undef SPDK_CONFIG_GOLANG 00:06:58.775 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:58.775 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:58.775 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:58.775 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:58.775 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:58.775 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:58.775 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:58.775 #define SPDK_CONFIG_IDXD 1 00:06:58.775 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:58.775 #undef SPDK_CONFIG_IPSEC_MB 00:06:58.775 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:58.775 #define SPDK_CONFIG_ISAL 1 00:06:58.775 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:58.775 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:58.775 #define SPDK_CONFIG_LIBDIR 00:06:58.775 #undef SPDK_CONFIG_LTO 00:06:58.775 #define SPDK_CONFIG_MAX_LCORES 00:06:58.775 #define SPDK_CONFIG_NVME_CUSE 1 00:06:58.775 #undef SPDK_CONFIG_OCF 00:06:58.775 #define SPDK_CONFIG_OCF_PATH 00:06:58.775 #define SPDK_CONFIG_OPENSSL_PATH 00:06:58.775 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:58.775 #define SPDK_CONFIG_PGO_DIR 00:06:58.775 #undef SPDK_CONFIG_PGO_USE 00:06:58.775 #define SPDK_CONFIG_PREFIX /usr/local 00:06:58.775 #undef SPDK_CONFIG_RAID5F 00:06:58.775 #undef SPDK_CONFIG_RBD 00:06:58.775 #define SPDK_CONFIG_RDMA 1 00:06:58.775 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:58.775 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:58.775 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:58.775 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:58.775 #define SPDK_CONFIG_SHARED 1 00:06:58.775 #undef SPDK_CONFIG_SMA 00:06:58.775 #define SPDK_CONFIG_TESTS 1 00:06:58.775 #undef SPDK_CONFIG_TSAN 00:06:58.775 #define SPDK_CONFIG_UBLK 1 00:06:58.775 #define SPDK_CONFIG_UBSAN 1 00:06:58.775 #undef SPDK_CONFIG_UNIT_TESTS 00:06:58.775 #undef SPDK_CONFIG_URING 00:06:58.775 #define SPDK_CONFIG_URING_PATH 00:06:58.775 #undef SPDK_CONFIG_URING_ZNS 00:06:58.775 #undef SPDK_CONFIG_USDT 00:06:58.775 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:58.775 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:58.775 #undef SPDK_CONFIG_VFIO_USER 00:06:58.775 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:58.775 #define SPDK_CONFIG_VHOST 1 00:06:58.775 #define SPDK_CONFIG_VIRTIO 1 00:06:58.775 #undef SPDK_CONFIG_VTUNE 00:06:58.775 #define SPDK_CONFIG_VTUNE_DIR 00:06:58.775 #define SPDK_CONFIG_WERROR 1 00:06:58.775 #define SPDK_CONFIG_WPDK_DIR 00:06:58.775 #undef SPDK_CONFIG_XNVME 00:06:58.775 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:58.775 12:50:03 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:58.775 12:50:03 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.775 12:50:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.775 12:50:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.775 12:50:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.775 12:50:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.775 12:50:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.775 12:50:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.775 12:50:03 -- paths/export.sh@5 -- # export PATH 00:06:58.775 12:50:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.775 12:50:03 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.775 12:50:03 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.775 12:50:03 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.775 12:50:03 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.775 12:50:03 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:58.775 12:50:03 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.775 12:50:03 -- pm/common@67 -- # TEST_TAG=N/A 00:06:58.775 12:50:03 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:58.775 12:50:03 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:58.775 12:50:03 -- pm/common@71 -- # uname -s 00:06:58.775 12:50:03 -- pm/common@71 -- # PM_OS=Linux 00:06:58.775 12:50:03 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:58.775 12:50:03 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:58.775 12:50:03 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:58.775 12:50:03 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:58.775 12:50:03 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:58.775 12:50:03 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:58.775 12:50:03 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:58.775 12:50:03 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:58.775 12:50:03 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:58.775 12:50:03 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:58.775 12:50:03 -- common/autotest_common.sh@57 -- # : 1 00:06:58.775 12:50:03 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:58.775 12:50:03 -- common/autotest_common.sh@61 -- # : 0 00:06:58.775 12:50:03 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:58.775 12:50:03 -- common/autotest_common.sh@63 -- # : 0 00:06:58.775 12:50:03 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:58.775 12:50:03 -- common/autotest_common.sh@65 -- # : 1 00:06:58.775 12:50:03 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:58.775 12:50:03 -- common/autotest_common.sh@67 -- # : 0 00:06:58.775 12:50:03 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:58.775 12:50:03 -- common/autotest_common.sh@69 -- # : 00:06:58.775 12:50:03 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:58.776 12:50:03 -- common/autotest_common.sh@71 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:58.776 12:50:03 -- common/autotest_common.sh@73 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:58.776 12:50:03 -- common/autotest_common.sh@75 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:58.776 12:50:03 -- common/autotest_common.sh@77 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:58.776 12:50:03 -- common/autotest_common.sh@79 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:58.776 12:50:03 -- common/autotest_common.sh@81 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:58.776 12:50:03 -- common/autotest_common.sh@83 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:58.776 12:50:03 -- common/autotest_common.sh@85 -- # : 1 00:06:58.776 12:50:03 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:58.776 12:50:03 -- common/autotest_common.sh@87 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:58.776 12:50:03 -- common/autotest_common.sh@89 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:58.776 12:50:03 -- common/autotest_common.sh@91 -- # : 1 
00:06:58.776 12:50:03 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:58.776 12:50:03 -- common/autotest_common.sh@93 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:58.776 12:50:03 -- common/autotest_common.sh@95 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:58.776 12:50:03 -- common/autotest_common.sh@97 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:58.776 12:50:03 -- common/autotest_common.sh@99 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:58.776 12:50:03 -- common/autotest_common.sh@101 -- # : tcp 00:06:58.776 12:50:03 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:58.776 12:50:03 -- common/autotest_common.sh@103 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:58.776 12:50:03 -- common/autotest_common.sh@105 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:58.776 12:50:03 -- common/autotest_common.sh@107 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:58.776 12:50:03 -- common/autotest_common.sh@109 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:58.776 12:50:03 -- common/autotest_common.sh@111 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:58.776 12:50:03 -- common/autotest_common.sh@113 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:58.776 12:50:03 -- common/autotest_common.sh@115 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:58.776 12:50:03 -- common/autotest_common.sh@117 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:58.776 12:50:03 -- common/autotest_common.sh@119 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:58.776 12:50:03 -- common/autotest_common.sh@121 -- # : 1 00:06:58.776 12:50:03 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:58.776 12:50:03 -- common/autotest_common.sh@123 -- # : 00:06:58.776 12:50:03 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:58.776 12:50:03 -- common/autotest_common.sh@125 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:58.776 12:50:03 -- common/autotest_common.sh@127 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:58.776 12:50:03 -- common/autotest_common.sh@129 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:58.776 12:50:03 -- common/autotest_common.sh@131 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:58.776 12:50:03 -- common/autotest_common.sh@133 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:58.776 12:50:03 -- common/autotest_common.sh@135 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:58.776 12:50:03 -- common/autotest_common.sh@137 -- # : 00:06:58.776 12:50:03 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:58.776 12:50:03 -- 
common/autotest_common.sh@139 -- # : true 00:06:58.776 12:50:03 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:58.776 12:50:03 -- common/autotest_common.sh@141 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:58.776 12:50:03 -- common/autotest_common.sh@143 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:58.776 12:50:03 -- common/autotest_common.sh@145 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:58.776 12:50:03 -- common/autotest_common.sh@147 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:58.776 12:50:03 -- common/autotest_common.sh@149 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:58.776 12:50:03 -- common/autotest_common.sh@151 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:58.776 12:50:03 -- common/autotest_common.sh@153 -- # : e810 00:06:58.776 12:50:03 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:58.776 12:50:03 -- common/autotest_common.sh@155 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:58.776 12:50:03 -- common/autotest_common.sh@157 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:58.776 12:50:03 -- common/autotest_common.sh@159 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:58.776 12:50:03 -- common/autotest_common.sh@161 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:58.776 12:50:03 -- common/autotest_common.sh@163 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:58.776 12:50:03 -- common/autotest_common.sh@166 -- # : 00:06:58.776 12:50:03 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:58.776 12:50:03 -- common/autotest_common.sh@168 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:58.776 12:50:03 -- common/autotest_common.sh@170 -- # : 0 00:06:58.776 12:50:03 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:58.776 12:50:03 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.776 12:50:03 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.776 12:50:03 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.776 12:50:03 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.776 12:50:03 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.776 12:50:03 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:58.776 12:50:03 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:58.776 12:50:03 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.776 12:50:03 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.777 12:50:03 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.777 12:50:03 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.777 12:50:03 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:58.777 12:50:03 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:58.777 12:50:03 -- common/autotest_common.sh@199 -- # cat 00:06:58.777 12:50:03 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:58.777 12:50:03 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.777 12:50:03 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.777 12:50:03 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.777 12:50:03 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.777 12:50:03 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:58.777 12:50:03 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:58.777 12:50:03 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.777 12:50:03 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.777 12:50:03 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.777 12:50:03 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.777 12:50:03 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.777 12:50:03 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.777 12:50:03 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.777 12:50:03 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.777 12:50:03 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.777 12:50:03 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.777 12:50:03 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.777 12:50:03 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.777 12:50:03 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:58.777 12:50:03 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:58.777 12:50:03 -- common/autotest_common.sh@252 -- # valgrind= 00:06:58.777 12:50:03 -- common/autotest_common.sh@258 -- # uname -s 00:06:58.777 12:50:03 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:58.777 12:50:03 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:58.777 12:50:03 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:58.777 12:50:03 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:58.777 12:50:03 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:58.777 
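The long run of '# : 0' / '# export SPDK_TEST_*' pairs traced above is autotest_common.sh giving each test flag a default and exporting it; the non-zero values (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, and so on) are the ones injected by autorun-spdk.conf earlier in the log. The underlying bash idiom is roughly the following sketch (the per-flag defaults shown here are illustrative, not taken from the script):

  : "${SPDK_TEST_NVMF:=0}"       # xtrace prints the expansion, e.g. '# : 1' when the conf file already set it
  export SPDK_TEST_NVMF          # traced as the matching '# export SPDK_TEST_NVMF' line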
12:50:03 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:58.777 12:50:03 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:06:58.777 12:50:03 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:58.777 12:50:03 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:58.777 12:50:03 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:58.777 12:50:03 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:58.777 12:50:03 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:58.777 12:50:03 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:58.777 12:50:03 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:58.777 12:50:03 -- common/autotest_common.sh@307 -- # [[ -z 3787692 ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@307 -- # kill -0 3787692 00:06:58.777 12:50:03 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:58.777 12:50:03 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:58.777 12:50:03 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:58.777 12:50:03 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:58.777 12:50:03 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:58.777 12:50:03 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:58.777 12:50:03 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:58.777 12:50:03 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.DxHjiO 00:06:58.777 12:50:03 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:58.777 12:50:03 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DxHjiO/tests/target /tmp/spdk.DxHjiO 00:06:58.777 12:50:03 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@316 -- # df -T 00:06:58.777 12:50:03 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=123161776128 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129371000832 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=6209224704 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=64682885120 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685498368 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864454144 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874202624 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=9748480 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=189440 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=314368 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684949504 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685502464 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=552960 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:06:58.777 12:50:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:06:58.777 12:50:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:58.777 12:50:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.777 12:50:03 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:58.777 * Looking for test storage... 
00:06:58.777 12:50:03 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:58.777 12:50:03 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:58.777 12:50:03 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.777 12:50:03 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:58.777 12:50:03 -- common/autotest_common.sh@361 -- # mount=/ 00:06:58.777 12:50:03 -- common/autotest_common.sh@363 -- # target_space=123161776128 00:06:58.777 12:50:03 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:58.777 12:50:03 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:58.777 12:50:03 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:58.777 12:50:03 -- common/autotest_common.sh@370 -- # new_size=8423817216 00:06:58.777 12:50:03 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:58.778 12:50:03 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.778 12:50:03 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.778 12:50:03 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.778 12:50:03 -- common/autotest_common.sh@378 -- # return 0 00:06:58.778 12:50:03 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:58.778 12:50:03 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:58.778 12:50:03 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:58.778 12:50:03 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:58.778 12:50:03 -- common/autotest_common.sh@1673 -- # true 00:06:58.778 12:50:03 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:58.778 12:50:03 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:58.778 12:50:03 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:58.778 12:50:03 -- common/autotest_common.sh@27 -- # exec 00:06:58.778 12:50:03 -- common/autotest_common.sh@29 -- # exec 00:06:58.778 12:50:03 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:58.778 12:50:03 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:58.778 12:50:03 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:58.778 12:50:03 -- common/autotest_common.sh@18 -- # set -x 00:06:58.778 12:50:03 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.778 12:50:03 -- nvmf/common.sh@7 -- # uname -s 00:06:58.778 12:50:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.778 12:50:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.778 12:50:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.778 12:50:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.778 12:50:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.778 12:50:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.778 12:50:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.778 12:50:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.778 12:50:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.778 12:50:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.778 12:50:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:58.778 12:50:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:58.778 12:50:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.778 12:50:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.778 12:50:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.778 12:50:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.778 12:50:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.778 12:50:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.778 12:50:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.778 12:50:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.778 12:50:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.778 12:50:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.778 12:50:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.778 12:50:03 -- paths/export.sh@5 -- # export PATH 00:06:58.778 12:50:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.778 12:50:03 -- nvmf/common.sh@47 -- # : 0 00:06:58.778 12:50:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.778 12:50:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.778 12:50:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.778 12:50:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.778 12:50:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.778 12:50:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.778 12:50:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.778 12:50:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.778 12:50:03 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:58.778 12:50:03 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:58.778 12:50:03 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:58.778 12:50:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:58.778 12:50:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.778 12:50:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:58.778 12:50:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:58.778 12:50:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:58.778 12:50:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.778 12:50:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.778 12:50:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.778 12:50:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:58.778 12:50:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:58.778 12:50:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.778 12:50:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.916 12:50:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:06.916 12:50:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.916 12:50:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.916 12:50:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.916 12:50:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.916 12:50:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.917 12:50:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.917 12:50:10 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:06.917 12:50:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.917 12:50:10 -- nvmf/common.sh@296 -- # e810=() 00:07:06.917 12:50:10 -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.917 12:50:10 -- nvmf/common.sh@297 -- # x722=() 00:07:06.917 12:50:10 -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.917 12:50:10 -- nvmf/common.sh@298 -- # mlx=() 00:07:06.917 12:50:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.917 12:50:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.917 12:50:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.917 12:50:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.917 12:50:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.917 12:50:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.917 12:50:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:06.917 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:06.917 12:50:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.917 12:50:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:06.917 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:06.917 12:50:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.917 12:50:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.917 12:50:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.917 12:50:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:06.917 12:50:10 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.917 12:50:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:06.917 Found net devices under 0000:31:00.0: cvl_0_0 00:07:06.917 12:50:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.917 12:50:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.917 12:50:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.917 12:50:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:06.917 12:50:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.917 12:50:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:06.917 Found net devices under 0000:31:00.1: cvl_0_1 00:07:06.917 12:50:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.917 12:50:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:06.917 12:50:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:06.917 12:50:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:06.917 12:50:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:06.917 12:50:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.917 12:50:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.917 12:50:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.917 12:50:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.917 12:50:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.917 12:50:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.917 12:50:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.917 12:50:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.917 12:50:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.917 12:50:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.917 12:50:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.917 12:50:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.917 12:50:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.917 12:50:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.917 12:50:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.917 12:50:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.917 12:50:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.917 12:50:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.917 12:50:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.917 12:50:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:07:06.917 00:07:06.917 --- 10.0.0.2 ping statistics --- 00:07:06.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.917 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:07:06.917 12:50:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:07:06.917 00:07:06.917 --- 10.0.0.1 ping statistics --- 00:07:06.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.917 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:07:06.917 12:50:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.917 12:50:11 -- nvmf/common.sh@411 -- # return 0 00:07:06.917 12:50:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:06.917 12:50:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.917 12:50:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:06.917 12:50:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:06.917 12:50:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.917 12:50:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:06.917 12:50:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:06.917 12:50:11 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:06.917 12:50:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:06.917 12:50:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.917 12:50:11 -- common/autotest_common.sh@10 -- # set +x 00:07:06.917 ************************************ 00:07:06.917 START TEST nvmf_filesystem_no_in_capsule 00:07:06.917 ************************************ 00:07:06.917 12:50:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:06.917 12:50:11 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:06.917 12:50:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:06.917 12:50:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:06.917 12:50:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:06.917 12:50:11 -- common/autotest_common.sh@10 -- # set +x 00:07:06.917 12:50:11 -- nvmf/common.sh@470 -- # nvmfpid=3791544 00:07:06.917 12:50:11 -- nvmf/common.sh@471 -- # waitforlisten 3791544 00:07:06.917 12:50:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:06.917 12:50:11 -- common/autotest_common.sh@817 -- # '[' -z 3791544 ']' 00:07:06.917 12:50:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.917 12:50:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.917 12:50:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.917 12:50:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.917 12:50:11 -- common/autotest_common.sh@10 -- # set +x 00:07:06.917 [2024-04-26 12:50:11.294131] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:07:06.917 [2024-04-26 12:50:11.294180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.917 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.917 [2024-04-26 12:50:11.365018] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.917 [2024-04-26 12:50:11.439529] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:06.917 [2024-04-26 12:50:11.439571] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.917 [2024-04-26 12:50:11.439579] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.917 [2024-04-26 12:50:11.439585] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.917 [2024-04-26 12:50:11.439591] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.917 [2024-04-26 12:50:11.439734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.918 [2024-04-26 12:50:11.439861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.918 [2024-04-26 12:50:11.439962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.918 [2024-04-26 12:50:11.439963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.179 12:50:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.179 12:50:12 -- common/autotest_common.sh@850 -- # return 0 00:07:07.179 12:50:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:07.179 12:50:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:07.179 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.179 12:50:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.179 12:50:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:07.179 12:50:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:07.179 12:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.179 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.179 [2024-04-26 12:50:12.117430] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.179 12:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.179 12:50:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:07.179 12:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.179 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.179 Malloc1 00:07:07.179 12:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.179 12:50:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:07.179 12:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.179 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.179 12:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.179 12:50:12 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.179 12:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.179 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.441 12:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.441 12:50:12 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.441 12:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.441 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.441 [2024-04-26 12:50:12.252122] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.441 12:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.441 12:50:12 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:07.441 12:50:12 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:07.441 12:50:12 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:07.441 12:50:12 -- common/autotest_common.sh@1366 -- # local bs 00:07:07.441 12:50:12 -- common/autotest_common.sh@1367 -- # local nb 00:07:07.441 12:50:12 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:07.441 12:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.441 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.441 12:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.441 12:50:12 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:07.441 { 00:07:07.441 "name": "Malloc1", 00:07:07.441 "aliases": [ 00:07:07.441 "3e5ef593-54f0-4053-a547-0f3cc690c1f7" 00:07:07.441 ], 00:07:07.441 "product_name": "Malloc disk", 00:07:07.441 "block_size": 512, 00:07:07.441 "num_blocks": 1048576, 00:07:07.441 "uuid": "3e5ef593-54f0-4053-a547-0f3cc690c1f7", 00:07:07.441 "assigned_rate_limits": { 00:07:07.441 "rw_ios_per_sec": 0, 00:07:07.441 "rw_mbytes_per_sec": 0, 00:07:07.441 "r_mbytes_per_sec": 0, 00:07:07.441 "w_mbytes_per_sec": 0 00:07:07.441 }, 00:07:07.441 "claimed": true, 00:07:07.441 "claim_type": "exclusive_write", 00:07:07.441 "zoned": false, 00:07:07.441 "supported_io_types": { 00:07:07.441 "read": true, 00:07:07.441 "write": true, 00:07:07.441 "unmap": true, 00:07:07.441 "write_zeroes": true, 00:07:07.441 "flush": true, 00:07:07.441 "reset": true, 00:07:07.441 "compare": false, 00:07:07.441 "compare_and_write": false, 00:07:07.441 "abort": true, 00:07:07.441 "nvme_admin": false, 00:07:07.441 "nvme_io": false 00:07:07.441 }, 00:07:07.441 "memory_domains": [ 00:07:07.441 { 00:07:07.441 "dma_device_id": "system", 00:07:07.441 "dma_device_type": 1 00:07:07.441 }, 00:07:07.441 { 00:07:07.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.441 "dma_device_type": 2 00:07:07.441 } 00:07:07.441 ], 00:07:07.441 "driver_specific": {} 00:07:07.441 } 00:07:07.441 ]' 00:07:07.441 12:50:12 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:07.441 12:50:12 -- common/autotest_common.sh@1369 -- # bs=512 00:07:07.441 12:50:12 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:07.441 12:50:12 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:07.441 12:50:12 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:07.441 12:50:12 -- common/autotest_common.sh@1374 -- # echo 512 00:07:07.441 12:50:12 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:07.441 12:50:12 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.826 12:50:13 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.826 12:50:13 -- common/autotest_common.sh@1184 -- # local i=0 00:07:08.826 12:50:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.826 12:50:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:08.826 12:50:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:11.367 12:50:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:11.367 12:50:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:11.367 12:50:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.367 12:50:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
00:07:11.367 12:50:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.367 12:50:15 -- common/autotest_common.sh@1194 -- # return 0 00:07:11.367 12:50:15 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:11.367 12:50:15 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:11.367 12:50:15 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:11.367 12:50:15 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:11.367 12:50:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:11.367 12:50:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:11.367 12:50:15 -- setup/common.sh@80 -- # echo 536870912 00:07:11.367 12:50:15 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:11.367 12:50:15 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:11.367 12:50:15 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:11.367 12:50:15 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:11.367 12:50:16 -- target/filesystem.sh@69 -- # partprobe 00:07:11.367 12:50:16 -- target/filesystem.sh@70 -- # sleep 1 00:07:12.310 12:50:17 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:12.310 12:50:17 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:12.310 12:50:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:12.310 12:50:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.310 12:50:17 -- common/autotest_common.sh@10 -- # set +x 00:07:12.570 ************************************ 00:07:12.570 START TEST filesystem_ext4 00:07:12.570 ************************************ 00:07:12.570 12:50:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:12.570 12:50:17 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:12.570 12:50:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.570 12:50:17 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:12.570 12:50:17 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:12.570 12:50:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:12.570 12:50:17 -- common/autotest_common.sh@914 -- # local i=0 00:07:12.570 12:50:17 -- common/autotest_common.sh@915 -- # local force 00:07:12.570 12:50:17 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:12.570 12:50:17 -- common/autotest_common.sh@918 -- # force=-F 00:07:12.570 12:50:17 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:12.570 mke2fs 1.46.5 (30-Dec-2021) 00:07:12.570 Discarding device blocks: 0/522240 done 00:07:12.570 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:12.570 Filesystem UUID: b2d6844e-ca80-411c-a4c6-da0b21d0f84a 00:07:12.570 Superblock backups stored on blocks: 00:07:12.570 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:12.570 00:07:12.570 Allocating group tables: 0/64 done 00:07:12.570 Writing inode tables: 0/64 done 00:07:12.854 Creating journal (8192 blocks): done 00:07:13.947 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:13.947 00:07:13.947 12:50:18 -- common/autotest_common.sh@931 -- # return 0 00:07:13.947 12:50:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.947 12:50:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:14.208 12:50:19 -- target/filesystem.sh@25 -- # sync 00:07:14.208 12:50:19 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:07:14.209 12:50:19 -- target/filesystem.sh@27 -- # sync 00:07:14.209 12:50:19 -- target/filesystem.sh@29 -- # i=0 00:07:14.209 12:50:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.209 12:50:19 -- target/filesystem.sh@37 -- # kill -0 3791544 00:07:14.209 12:50:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.209 12:50:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.209 12:50:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.209 12:50:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.209 00:07:14.209 real 0m1.610s 00:07:14.209 user 0m0.030s 00:07:14.209 sys 0m0.067s 00:07:14.209 12:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.209 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.209 ************************************ 00:07:14.209 END TEST filesystem_ext4 00:07:14.209 ************************************ 00:07:14.209 12:50:19 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:14.209 12:50:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:14.209 12:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.209 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.469 ************************************ 00:07:14.469 START TEST filesystem_btrfs 00:07:14.469 ************************************ 00:07:14.469 12:50:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:14.469 12:50:19 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:14.469 12:50:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.469 12:50:19 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:14.470 12:50:19 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:14.470 12:50:19 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:14.470 12:50:19 -- common/autotest_common.sh@914 -- # local i=0 00:07:14.470 12:50:19 -- common/autotest_common.sh@915 -- # local force 00:07:14.470 12:50:19 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:14.470 12:50:19 -- common/autotest_common.sh@920 -- # force=-f 00:07:14.470 12:50:19 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:14.470 btrfs-progs v6.6.2 00:07:14.470 See https://btrfs.readthedocs.io for more information. 00:07:14.470 00:07:14.470 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:14.470 NOTE: several default settings have changed in version 5.15, please make sure 00:07:14.470 this does not affect your deployments: 00:07:14.470 - DUP for metadata (-m dup) 00:07:14.470 - enabled no-holes (-O no-holes) 00:07:14.470 - enabled free-space-tree (-R free-space-tree) 00:07:14.470 00:07:14.470 Label: (null) 00:07:14.470 UUID: 17f19b5b-7b5d-449d-ab54-690080656766 00:07:14.470 Node size: 16384 00:07:14.470 Sector size: 4096 00:07:14.470 Filesystem size: 510.00MiB 00:07:14.470 Block group profiles: 00:07:14.470 Data: single 8.00MiB 00:07:14.470 Metadata: DUP 32.00MiB 00:07:14.470 System: DUP 8.00MiB 00:07:14.470 SSD detected: yes 00:07:14.470 Zoned device: no 00:07:14.470 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:14.470 Runtime features: free-space-tree 00:07:14.470 Checksum: crc32c 00:07:14.470 Number of devices: 1 00:07:14.470 Devices: 00:07:14.470 ID SIZE PATH 00:07:14.470 1 510.00MiB /dev/nvme0n1p1 00:07:14.470 00:07:14.470 12:50:19 -- common/autotest_common.sh@931 -- # return 0 00:07:14.470 12:50:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:14.730 12:50:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:14.730 12:50:19 -- target/filesystem.sh@25 -- # sync 00:07:14.730 12:50:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:14.730 12:50:19 -- target/filesystem.sh@27 -- # sync 00:07:14.730 12:50:19 -- target/filesystem.sh@29 -- # i=0 00:07:14.730 12:50:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.730 12:50:19 -- target/filesystem.sh@37 -- # kill -0 3791544 00:07:14.730 12:50:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.730 12:50:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.730 12:50:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.730 12:50:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.730 00:07:14.730 real 0m0.497s 00:07:14.730 user 0m0.032s 00:07:14.730 sys 0m0.126s 00:07:14.730 12:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.730 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.730 ************************************ 00:07:14.730 END TEST filesystem_btrfs 00:07:14.730 ************************************ 00:07:14.991 12:50:19 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:14.991 12:50:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:14.991 12:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.991 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.991 ************************************ 00:07:14.991 START TEST filesystem_xfs 00:07:14.991 ************************************ 00:07:14.991 12:50:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:14.991 12:50:19 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:14.991 12:50:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.991 12:50:19 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:14.991 12:50:19 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:14.991 12:50:19 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:14.991 12:50:19 -- common/autotest_common.sh@914 -- # local i=0 00:07:14.991 12:50:19 -- common/autotest_common.sh@915 -- # local force 00:07:14.991 12:50:19 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:14.991 12:50:19 -- common/autotest_common.sh@920 -- # force=-f 00:07:14.991 12:50:19 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:14.991 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:14.991 = sectsz=512 attr=2, projid32bit=1 00:07:14.991 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:14.991 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:14.991 data = bsize=4096 blocks=130560, imaxpct=25 00:07:14.991 = sunit=0 swidth=0 blks 00:07:14.991 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:14.991 log =internal log bsize=4096 blocks=16384, version=2 00:07:14.991 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:14.991 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:16.373 Discarding blocks...Done. 00:07:16.373 12:50:21 -- common/autotest_common.sh@931 -- # return 0 00:07:16.373 12:50:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.282 12:50:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.282 12:50:22 -- target/filesystem.sh@25 -- # sync 00:07:18.282 12:50:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.282 12:50:22 -- target/filesystem.sh@27 -- # sync 00:07:18.282 12:50:22 -- target/filesystem.sh@29 -- # i=0 00:07:18.282 12:50:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.282 12:50:22 -- target/filesystem.sh@37 -- # kill -0 3791544 00:07:18.282 12:50:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.282 12:50:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.282 12:50:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.282 12:50:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.282 00:07:18.282 real 0m2.966s 00:07:18.282 user 0m0.027s 00:07:18.282 sys 0m0.075s 00:07:18.283 12:50:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.283 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:07:18.283 ************************************ 00:07:18.283 END TEST filesystem_xfs 00:07:18.283 ************************************ 00:07:18.283 12:50:22 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:18.283 12:50:23 -- target/filesystem.sh@93 -- # sync 00:07:18.542 12:50:23 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.542 12:50:23 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.542 12:50:23 -- common/autotest_common.sh@1205 -- # local i=0 00:07:18.542 12:50:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:18.542 12:50:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.542 12:50:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:18.542 12:50:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.801 12:50:23 -- common/autotest_common.sh@1217 -- # return 0 00:07:18.801 12:50:23 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.801 12:50:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.801 12:50:23 -- common/autotest_common.sh@10 -- # set +x 00:07:18.801 12:50:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.801 12:50:23 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:18.801 12:50:23 -- target/filesystem.sh@101 -- # killprocess 3791544 00:07:18.801 12:50:23 -- common/autotest_common.sh@936 -- # '[' -z 3791544 ']' 00:07:18.801 12:50:23 -- common/autotest_common.sh@940 -- # kill -0 3791544 00:07:18.801 12:50:23 -- 
common/autotest_common.sh@941 -- # uname 00:07:18.801 12:50:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.801 12:50:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3791544 00:07:18.801 12:50:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.801 12:50:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.801 12:50:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3791544' 00:07:18.801 killing process with pid 3791544 00:07:18.801 12:50:23 -- common/autotest_common.sh@955 -- # kill 3791544 00:07:18.801 12:50:23 -- common/autotest_common.sh@960 -- # wait 3791544 00:07:19.061 12:50:23 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:19.061 00:07:19.061 real 0m12.674s 00:07:19.061 user 0m50.050s 00:07:19.061 sys 0m1.346s 00:07:19.061 12:50:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.061 12:50:23 -- common/autotest_common.sh@10 -- # set +x 00:07:19.061 ************************************ 00:07:19.061 END TEST nvmf_filesystem_no_in_capsule 00:07:19.061 ************************************ 00:07:19.061 12:50:23 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:19.061 12:50:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:19.061 12:50:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.061 12:50:23 -- common/autotest_common.sh@10 -- # set +x 00:07:19.061 ************************************ 00:07:19.061 START TEST nvmf_filesystem_in_capsule 00:07:19.061 ************************************ 00:07:19.061 12:50:24 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:19.061 12:50:24 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:19.061 12:50:24 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:19.061 12:50:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:19.061 12:50:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:19.061 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:19.061 12:50:24 -- nvmf/common.sh@470 -- # nvmfpid=3794172 00:07:19.061 12:50:24 -- nvmf/common.sh@471 -- # waitforlisten 3794172 00:07:19.061 12:50:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.061 12:50:24 -- common/autotest_common.sh@817 -- # '[' -z 3794172 ']' 00:07:19.061 12:50:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.061 12:50:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:19.061 12:50:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.061 12:50:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:19.061 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:19.321 [2024-04-26 12:50:24.145781] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:07:19.321 [2024-04-26 12:50:24.145828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.321 [2024-04-26 12:50:24.213634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.321 [2024-04-26 12:50:24.279193] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.321 [2024-04-26 12:50:24.279231] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.321 [2024-04-26 12:50:24.279240] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.321 [2024-04-26 12:50:24.279248] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.321 [2024-04-26 12:50:24.279255] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.321 [2024-04-26 12:50:24.279434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.321 [2024-04-26 12:50:24.279554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.321 [2024-04-26 12:50:24.279717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.321 [2024-04-26 12:50:24.279717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.893 12:50:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:19.893 12:50:24 -- common/autotest_common.sh@850 -- # return 0 00:07:19.893 12:50:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:19.893 12:50:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:19.893 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 12:50:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.153 12:50:24 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:20.153 12:50:24 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:20.153 12:50:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.153 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 [2024-04-26 12:50:24.959408] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.153 12:50:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.153 12:50:24 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:20.153 12:50:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.153 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 Malloc1 00:07:20.153 12:50:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.153 12:50:25 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:20.153 12:50:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.153 12:50:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 12:50:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.153 12:50:25 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.153 12:50:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.153 12:50:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 12:50:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.153 12:50:25 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.153 12:50:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.153 12:50:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 [2024-04-26 12:50:25.085154] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.153 12:50:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.153 12:50:25 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:20.153 12:50:25 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:20.153 12:50:25 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:20.153 12:50:25 -- common/autotest_common.sh@1366 -- # local bs 00:07:20.153 12:50:25 -- common/autotest_common.sh@1367 -- # local nb 00:07:20.153 12:50:25 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:20.153 12:50:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.153 12:50:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 12:50:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.153 12:50:25 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:20.153 { 00:07:20.153 "name": "Malloc1", 00:07:20.153 "aliases": [ 00:07:20.153 "c0c7cba3-dcd4-4862-82f0-fd2b025c0747" 00:07:20.153 ], 00:07:20.153 "product_name": "Malloc disk", 00:07:20.153 "block_size": 512, 00:07:20.153 "num_blocks": 1048576, 00:07:20.153 "uuid": "c0c7cba3-dcd4-4862-82f0-fd2b025c0747", 00:07:20.153 "assigned_rate_limits": { 00:07:20.153 "rw_ios_per_sec": 0, 00:07:20.153 "rw_mbytes_per_sec": 0, 00:07:20.153 "r_mbytes_per_sec": 0, 00:07:20.153 "w_mbytes_per_sec": 0 00:07:20.153 }, 00:07:20.153 "claimed": true, 00:07:20.153 "claim_type": "exclusive_write", 00:07:20.153 "zoned": false, 00:07:20.153 "supported_io_types": { 00:07:20.153 "read": true, 00:07:20.153 "write": true, 00:07:20.153 "unmap": true, 00:07:20.153 "write_zeroes": true, 00:07:20.153 "flush": true, 00:07:20.153 "reset": true, 00:07:20.153 "compare": false, 00:07:20.153 "compare_and_write": false, 00:07:20.153 "abort": true, 00:07:20.154 "nvme_admin": false, 00:07:20.154 "nvme_io": false 00:07:20.154 }, 00:07:20.154 "memory_domains": [ 00:07:20.154 { 00:07:20.154 "dma_device_id": "system", 00:07:20.154 "dma_device_type": 1 00:07:20.154 }, 00:07:20.154 { 00:07:20.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.154 "dma_device_type": 2 00:07:20.154 } 00:07:20.154 ], 00:07:20.154 "driver_specific": {} 00:07:20.154 } 00:07:20.154 ]' 00:07:20.154 12:50:25 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:20.154 12:50:25 -- common/autotest_common.sh@1369 -- # bs=512 00:07:20.154 12:50:25 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:20.154 12:50:25 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:20.154 12:50:25 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:20.154 12:50:25 -- common/autotest_common.sh@1374 -- # echo 512 00:07:20.154 12:50:25 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:20.154 12:50:25 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:22.066 12:50:26 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:22.066 12:50:26 -- common/autotest_common.sh@1184 -- # local i=0 00:07:22.066 12:50:26 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:22.066 12:50:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:22.066 12:50:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:23.980 12:50:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:23.980 12:50:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:23.980 12:50:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.980 12:50:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:23.980 12:50:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.980 12:50:28 -- common/autotest_common.sh@1194 -- # return 0 00:07:23.980 12:50:28 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.980 12:50:28 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:23.980 12:50:28 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.980 12:50:28 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.980 12:50:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.980 12:50:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.981 12:50:28 -- setup/common.sh@80 -- # echo 536870912 00:07:23.981 12:50:28 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.981 12:50:28 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.981 12:50:28 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.981 12:50:28 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:23.981 12:50:28 -- target/filesystem.sh@69 -- # partprobe 00:07:24.552 12:50:29 -- target/filesystem.sh@70 -- # sleep 1 00:07:25.939 12:50:30 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:25.939 12:50:30 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:25.939 12:50:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:25.939 12:50:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.939 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:07:25.939 ************************************ 00:07:25.939 START TEST filesystem_in_capsule_ext4 00:07:25.939 ************************************ 00:07:25.939 12:50:30 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:25.939 12:50:30 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:25.939 12:50:30 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.939 12:50:30 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:25.939 12:50:30 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:25.939 12:50:30 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:25.939 12:50:30 -- common/autotest_common.sh@914 -- # local i=0 00:07:25.939 12:50:30 -- common/autotest_common.sh@915 -- # local force 00:07:25.939 12:50:30 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:25.939 12:50:30 -- common/autotest_common.sh@918 -- # force=-F 00:07:25.939 12:50:30 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:25.939 mke2fs 1.46.5 (30-Dec-2021) 00:07:25.939 Discarding device blocks: 0/522240 done 00:07:25.939 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:25.939 Filesystem UUID: d794f588-e804-4e9f-9a45-8dffc54a8df8 00:07:25.939 Superblock backups stored on blocks: 00:07:25.939 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:25.939 00:07:25.939 
Allocating group tables: 0/64 done 00:07:25.939 Writing inode tables: 0/64 done 00:07:27.327 Creating journal (8192 blocks): done 00:07:27.327 Writing superblocks and filesystem accounting information: 0/64 done 00:07:27.327 00:07:27.327 12:50:32 -- common/autotest_common.sh@931 -- # return 0 00:07:27.327 12:50:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.270 12:50:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.270 12:50:33 -- target/filesystem.sh@25 -- # sync 00:07:28.270 12:50:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.270 12:50:33 -- target/filesystem.sh@27 -- # sync 00:07:28.270 12:50:33 -- target/filesystem.sh@29 -- # i=0 00:07:28.270 12:50:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.270 12:50:33 -- target/filesystem.sh@37 -- # kill -0 3794172 00:07:28.270 12:50:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.270 12:50:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.270 12:50:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.270 12:50:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.270 00:07:28.270 real 0m2.553s 00:07:28.270 user 0m0.035s 00:07:28.270 sys 0m0.063s 00:07:28.270 12:50:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.270 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:07:28.270 ************************************ 00:07:28.270 END TEST filesystem_in_capsule_ext4 00:07:28.270 ************************************ 00:07:28.530 12:50:33 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.530 12:50:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.530 12:50:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.530 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:07:28.530 ************************************ 00:07:28.530 START TEST filesystem_in_capsule_btrfs 00:07:28.530 ************************************ 00:07:28.530 12:50:33 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.530 12:50:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.530 12:50:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.530 12:50:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.530 12:50:33 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:28.530 12:50:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:28.530 12:50:33 -- common/autotest_common.sh@914 -- # local i=0 00:07:28.530 12:50:33 -- common/autotest_common.sh@915 -- # local force 00:07:28.530 12:50:33 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:28.530 12:50:33 -- common/autotest_common.sh@920 -- # force=-f 00:07:28.530 12:50:33 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:28.790 btrfs-progs v6.6.2 00:07:28.790 See https://btrfs.readthedocs.io for more information. 00:07:28.790 00:07:28.790 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:28.790 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.790 this does not affect your deployments: 00:07:28.790 - DUP for metadata (-m dup) 00:07:28.790 - enabled no-holes (-O no-holes) 00:07:28.790 - enabled free-space-tree (-R free-space-tree) 00:07:28.790 00:07:28.790 Label: (null) 00:07:28.790 UUID: 6fe61ff0-d8d3-4c6c-b455-02f0f4d1e85e 00:07:28.790 Node size: 16384 00:07:28.790 Sector size: 4096 00:07:28.790 Filesystem size: 510.00MiB 00:07:28.790 Block group profiles: 00:07:28.790 Data: single 8.00MiB 00:07:28.790 Metadata: DUP 32.00MiB 00:07:28.790 System: DUP 8.00MiB 00:07:28.790 SSD detected: yes 00:07:28.790 Zoned device: no 00:07:28.790 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.790 Runtime features: free-space-tree 00:07:28.790 Checksum: crc32c 00:07:28.790 Number of devices: 1 00:07:28.790 Devices: 00:07:28.790 ID SIZE PATH 00:07:28.790 1 510.00MiB /dev/nvme0n1p1 00:07:28.790 00:07:28.790 12:50:33 -- common/autotest_common.sh@931 -- # return 0 00:07:28.790 12:50:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.362 12:50:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.362 12:50:34 -- target/filesystem.sh@25 -- # sync 00:07:29.362 12:50:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.362 12:50:34 -- target/filesystem.sh@27 -- # sync 00:07:29.362 12:50:34 -- target/filesystem.sh@29 -- # i=0 00:07:29.362 12:50:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.362 12:50:34 -- target/filesystem.sh@37 -- # kill -0 3794172 00:07:29.362 12:50:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.362 12:50:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.623 12:50:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.623 12:50:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.623 00:07:29.623 real 0m0.938s 00:07:29.623 user 0m0.032s 00:07:29.623 sys 0m0.132s 00:07:29.623 12:50:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.623 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:07:29.623 ************************************ 00:07:29.623 END TEST filesystem_in_capsule_btrfs 00:07:29.623 ************************************ 00:07:29.623 12:50:34 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:29.623 12:50:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:29.623 12:50:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.623 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:07:29.623 ************************************ 00:07:29.623 START TEST filesystem_in_capsule_xfs 00:07:29.623 ************************************ 00:07:29.623 12:50:34 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.623 12:50:34 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.623 12:50:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.623 12:50:34 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.623 12:50:34 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:29.623 12:50:34 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:29.623 12:50:34 -- common/autotest_common.sh@914 -- # local i=0 00:07:29.623 12:50:34 -- common/autotest_common.sh@915 -- # local force 00:07:29.623 12:50:34 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:29.623 12:50:34 -- common/autotest_common.sh@920 -- # force=-f 
00:07:29.623 12:50:34 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:29.623 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.623 = sectsz=512 attr=2, projid32bit=1 00:07:29.623 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.623 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.623 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.623 = sunit=0 swidth=0 blks 00:07:29.623 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.623 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.623 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.623 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.566 Discarding blocks...Done. 00:07:30.566 12:50:35 -- common/autotest_common.sh@931 -- # return 0 00:07:30.566 12:50:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.477 12:50:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.477 12:50:37 -- target/filesystem.sh@25 -- # sync 00:07:32.477 12:50:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.477 12:50:37 -- target/filesystem.sh@27 -- # sync 00:07:32.477 12:50:37 -- target/filesystem.sh@29 -- # i=0 00:07:32.477 12:50:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.477 12:50:37 -- target/filesystem.sh@37 -- # kill -0 3794172 00:07:32.477 12:50:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.477 12:50:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.477 12:50:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.477 12:50:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.477 00:07:32.477 real 0m2.652s 00:07:32.477 user 0m0.028s 00:07:32.477 sys 0m0.076s 00:07:32.477 12:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.477 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:07:32.477 ************************************ 00:07:32.477 END TEST filesystem_in_capsule_xfs 00:07:32.477 ************************************ 00:07:32.477 12:50:37 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:32.477 12:50:37 -- target/filesystem.sh@93 -- # sync 00:07:32.478 12:50:37 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.478 12:50:37 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.478 12:50:37 -- common/autotest_common.sh@1205 -- # local i=0 00:07:32.478 12:50:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:32.478 12:50:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.478 12:50:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:32.478 12:50:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.478 12:50:37 -- common/autotest_common.sh@1217 -- # return 0 00:07:32.478 12:50:37 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.478 12:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.478 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:07:32.738 12:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.738 12:50:37 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:32.738 12:50:37 -- target/filesystem.sh@101 -- # killprocess 3794172 00:07:32.738 12:50:37 -- common/autotest_common.sh@936 -- # '[' -z 3794172 ']' 00:07:32.738 12:50:37 -- common/autotest_common.sh@940 -- # kill -0 3794172 
00:07:32.738 12:50:37 -- common/autotest_common.sh@941 -- # uname 00:07:32.738 12:50:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.738 12:50:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3794172 00:07:32.738 12:50:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.738 12:50:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.738 12:50:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3794172' 00:07:32.738 killing process with pid 3794172 00:07:32.738 12:50:37 -- common/autotest_common.sh@955 -- # kill 3794172 00:07:32.738 12:50:37 -- common/autotest_common.sh@960 -- # wait 3794172 00:07:32.999 12:50:37 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:32.999 00:07:32.999 real 0m13.754s 00:07:32.999 user 0m54.402s 00:07:32.999 sys 0m1.345s 00:07:32.999 12:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.999 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:07:32.999 ************************************ 00:07:32.999 END TEST nvmf_filesystem_in_capsule 00:07:32.999 ************************************ 00:07:32.999 12:50:37 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:32.999 12:50:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:32.999 12:50:37 -- nvmf/common.sh@117 -- # sync 00:07:32.999 12:50:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.999 12:50:37 -- nvmf/common.sh@120 -- # set +e 00:07:32.999 12:50:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.999 12:50:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.999 rmmod nvme_tcp 00:07:32.999 rmmod nvme_fabrics 00:07:32.999 rmmod nvme_keyring 00:07:32.999 12:50:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.999 12:50:37 -- nvmf/common.sh@124 -- # set -e 00:07:32.999 12:50:37 -- nvmf/common.sh@125 -- # return 0 00:07:32.999 12:50:37 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:32.999 12:50:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:32.999 12:50:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:32.999 12:50:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:32.999 12:50:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.999 12:50:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.999 12:50:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.999 12:50:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.999 12:50:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.547 12:50:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.547 00:07:35.547 real 0m36.534s 00:07:35.547 user 1m46.745s 00:07:35.547 sys 0m8.354s 00:07:35.547 12:50:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.547 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:07:35.547 ************************************ 00:07:35.547 END TEST nvmf_filesystem 00:07:35.547 ************************************ 00:07:35.547 12:50:40 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:35.547 12:50:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:35.547 12:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.547 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:07:35.547 ************************************ 00:07:35.547 START TEST nvmf_discovery 00:07:35.547 ************************************ 00:07:35.547 
12:50:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:35.547 * Looking for test storage... 00:07:35.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.547 12:50:40 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.547 12:50:40 -- nvmf/common.sh@7 -- # uname -s 00:07:35.547 12:50:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.547 12:50:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.547 12:50:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.547 12:50:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.547 12:50:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.547 12:50:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.547 12:50:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.547 12:50:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.547 12:50:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.547 12:50:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.547 12:50:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:35.547 12:50:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:35.547 12:50:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.547 12:50:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.547 12:50:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.547 12:50:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.547 12:50:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.547 12:50:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.547 12:50:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.547 12:50:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.547 12:50:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.547 12:50:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.547 12:50:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.547 12:50:40 -- paths/export.sh@5 -- # export PATH 00:07:35.547 12:50:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.547 12:50:40 -- nvmf/common.sh@47 -- # : 0 00:07:35.547 12:50:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.547 12:50:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.547 12:50:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.547 12:50:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.547 12:50:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.547 12:50:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.547 12:50:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.547 12:50:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.547 12:50:40 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:35.547 12:50:40 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:35.548 12:50:40 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:35.548 12:50:40 -- target/discovery.sh@15 -- # hash nvme 00:07:35.548 12:50:40 -- target/discovery.sh@20 -- # nvmftestinit 00:07:35.548 12:50:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:35.548 12:50:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.548 12:50:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:35.548 12:50:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:35.548 12:50:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:35.548 12:50:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.548 12:50:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.548 12:50:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.548 12:50:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:35.548 12:50:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:35.548 12:50:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.548 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:07:42.136 12:50:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:42.136 12:50:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.136 12:50:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.136 12:50:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.136 12:50:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.136 12:50:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.136 12:50:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.136 12:50:47 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:42.136 12:50:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.136 12:50:47 -- nvmf/common.sh@296 -- # e810=() 00:07:42.136 12:50:47 -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.136 12:50:47 -- nvmf/common.sh@297 -- # x722=() 00:07:42.136 12:50:47 -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.136 12:50:47 -- nvmf/common.sh@298 -- # mlx=() 00:07:42.136 12:50:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.136 12:50:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.136 12:50:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.136 12:50:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.136 12:50:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.136 12:50:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.136 12:50:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:42.136 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:42.136 12:50:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.136 12:50:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:42.136 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:42.136 12:50:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.136 12:50:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.136 12:50:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.136 12:50:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:42.136 12:50:47 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.136 12:50:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:42.136 Found net devices under 0000:31:00.0: cvl_0_0 00:07:42.136 12:50:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.136 12:50:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.136 12:50:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.136 12:50:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:42.136 12:50:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.136 12:50:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:42.136 Found net devices under 0000:31:00.1: cvl_0_1 00:07:42.136 12:50:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.136 12:50:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:42.136 12:50:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:42.136 12:50:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:42.136 12:50:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:42.136 12:50:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.136 12:50:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.136 12:50:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.136 12:50:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.136 12:50:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.136 12:50:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.136 12:50:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.136 12:50:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.136 12:50:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.136 12:50:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.136 12:50:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.136 12:50:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.136 12:50:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.397 12:50:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.397 12:50:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.397 12:50:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.397 12:50:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.657 12:50:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.657 12:50:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.657 12:50:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:07:42.657 00:07:42.657 --- 10.0.0.2 ping statistics --- 00:07:42.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.657 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:07:42.657 12:50:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:07:42.657 00:07:42.657 --- 10.0.0.1 ping statistics --- 00:07:42.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.657 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:07:42.657 12:50:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.657 12:50:47 -- nvmf/common.sh@411 -- # return 0 00:07:42.657 12:50:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:42.657 12:50:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.657 12:50:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:42.657 12:50:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:42.657 12:50:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.657 12:50:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:42.657 12:50:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:42.657 12:50:47 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:42.657 12:50:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:42.657 12:50:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:42.657 12:50:47 -- common/autotest_common.sh@10 -- # set +x 00:07:42.657 12:50:47 -- nvmf/common.sh@470 -- # nvmfpid=3801474 00:07:42.657 12:50:47 -- nvmf/common.sh@471 -- # waitforlisten 3801474 00:07:42.657 12:50:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.657 12:50:47 -- common/autotest_common.sh@817 -- # '[' -z 3801474 ']' 00:07:42.657 12:50:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.657 12:50:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:42.657 12:50:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.657 12:50:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:42.657 12:50:47 -- common/autotest_common.sh@10 -- # set +x 00:07:42.657 [2024-04-26 12:50:47.595032] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:07:42.657 [2024-04-26 12:50:47.595093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.657 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.657 [2024-04-26 12:50:47.670400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.938 [2024-04-26 12:50:47.742764] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.938 [2024-04-26 12:50:47.742805] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.938 [2024-04-26 12:50:47.742814] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.938 [2024-04-26 12:50:47.742821] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.938 [2024-04-26 12:50:47.742828] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
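The nvmf_tcp_init plumbing traced above gives every TCP target test the same two-port topology: one port of the dual-port E810 (cvl_0_0, 10.0.0.2) is moved into a private network namespace and acts as the target, while the other port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Condensed into plain commands with the names from this run, the setup and its reachability check are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # keep the host firewall out of the NVMe/TCP path
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator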
00:07:42.938 [2024-04-26 12:50:47.742983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.938 [2024-04-26 12:50:47.743158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.938 [2024-04-26 12:50:47.743315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.938 [2024-04-26 12:50:47.743316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.586 12:50:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:43.586 12:50:48 -- common/autotest_common.sh@850 -- # return 0 00:07:43.586 12:50:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:43.586 12:50:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.586 12:50:48 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 [2024-04-26 12:50:48.421375] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@26 -- # seq 1 4 00:07:43.586 12:50:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.586 12:50:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 Null1 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 [2024-04-26 12:50:48.481693] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.586 12:50:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 Null2 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:43.586 12:50:48 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.586 12:50:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 Null3 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.586 12:50:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 Null4 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:43.586 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.586 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.586 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.586 12:50:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:43.587 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.587 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.587 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.587 12:50:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:43.587 
12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.587 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.587 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.587 12:50:48 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.587 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.587 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.587 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.587 12:50:48 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:43.587 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.587 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.847 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.847 12:50:48 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:07:43.847 00:07:43.847 Discovery Log Number of Records 6, Generation counter 6 00:07:43.847 =====Discovery Log Entry 0====== 00:07:43.847 trtype: tcp 00:07:43.847 adrfam: ipv4 00:07:43.847 subtype: current discovery subsystem 00:07:43.847 treq: not required 00:07:43.847 portid: 0 00:07:43.847 trsvcid: 4420 00:07:43.847 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:43.847 traddr: 10.0.0.2 00:07:43.847 eflags: explicit discovery connections, duplicate discovery information 00:07:43.847 sectype: none 00:07:43.847 =====Discovery Log Entry 1====== 00:07:43.847 trtype: tcp 00:07:43.847 adrfam: ipv4 00:07:43.847 subtype: nvme subsystem 00:07:43.847 treq: not required 00:07:43.847 portid: 0 00:07:43.847 trsvcid: 4420 00:07:43.848 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:43.848 traddr: 10.0.0.2 00:07:43.848 eflags: none 00:07:43.848 sectype: none 00:07:43.848 =====Discovery Log Entry 2====== 00:07:43.848 trtype: tcp 00:07:43.848 adrfam: ipv4 00:07:43.848 subtype: nvme subsystem 00:07:43.848 treq: not required 00:07:43.848 portid: 0 00:07:43.848 trsvcid: 4420 00:07:43.848 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:43.848 traddr: 10.0.0.2 00:07:43.848 eflags: none 00:07:43.848 sectype: none 00:07:43.848 =====Discovery Log Entry 3====== 00:07:43.848 trtype: tcp 00:07:43.848 adrfam: ipv4 00:07:43.848 subtype: nvme subsystem 00:07:43.848 treq: not required 00:07:43.848 portid: 0 00:07:43.848 trsvcid: 4420 00:07:43.848 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:43.848 traddr: 10.0.0.2 00:07:43.848 eflags: none 00:07:43.848 sectype: none 00:07:43.848 =====Discovery Log Entry 4====== 00:07:43.848 trtype: tcp 00:07:43.848 adrfam: ipv4 00:07:43.848 subtype: nvme subsystem 00:07:43.848 treq: not required 00:07:43.848 portid: 0 00:07:43.848 trsvcid: 4420 00:07:43.848 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:43.848 traddr: 10.0.0.2 00:07:43.848 eflags: none 00:07:43.848 sectype: none 00:07:43.848 =====Discovery Log Entry 5====== 00:07:43.848 trtype: tcp 00:07:43.848 adrfam: ipv4 00:07:43.848 subtype: discovery subsystem referral 00:07:43.848 treq: not required 00:07:43.848 portid: 0 00:07:43.848 trsvcid: 4430 00:07:43.848 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:43.848 traddr: 10.0.0.2 00:07:43.848 eflags: none 00:07:43.848 sectype: none 00:07:43.848 12:50:48 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:43.848 Perform nvmf subsystem discovery via RPC 00:07:43.848 12:50:48 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:43.848 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.848 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.848 [2024-04-26 12:50:48.870864] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:43.848 [ 00:07:43.848 { 00:07:43.848 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:43.848 "subtype": "Discovery", 00:07:43.848 "listen_addresses": [ 00:07:43.848 { 00:07:43.848 "transport": "TCP", 00:07:43.848 "trtype": "TCP", 00:07:43.848 "adrfam": "IPv4", 00:07:43.848 "traddr": "10.0.0.2", 00:07:43.848 "trsvcid": "4420" 00:07:43.848 } 00:07:43.848 ], 00:07:43.848 "allow_any_host": true, 00:07:43.848 "hosts": [] 00:07:43.848 }, 00:07:43.848 { 00:07:43.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.848 "subtype": "NVMe", 00:07:43.848 "listen_addresses": [ 00:07:43.848 { 00:07:43.848 "transport": "TCP", 00:07:43.848 "trtype": "TCP", 00:07:43.848 "adrfam": "IPv4", 00:07:43.848 "traddr": "10.0.0.2", 00:07:43.848 "trsvcid": "4420" 00:07:43.848 } 00:07:43.848 ], 00:07:43.848 "allow_any_host": true, 00:07:43.848 "hosts": [], 00:07:43.848 "serial_number": "SPDK00000000000001", 00:07:43.848 "model_number": "SPDK bdev Controller", 00:07:43.848 "max_namespaces": 32, 00:07:43.848 "min_cntlid": 1, 00:07:43.848 "max_cntlid": 65519, 00:07:43.848 "namespaces": [ 00:07:43.848 { 00:07:43.848 "nsid": 1, 00:07:43.848 "bdev_name": "Null1", 00:07:43.848 "name": "Null1", 00:07:43.848 "nguid": "9CD3F4647DB24EE5B859F17C92AB2BCA", 00:07:43.848 "uuid": "9cd3f464-7db2-4ee5-b859-f17c92ab2bca" 00:07:43.848 } 00:07:43.848 ] 00:07:43.848 }, 00:07:43.848 { 00:07:43.848 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:43.848 "subtype": "NVMe", 00:07:43.848 "listen_addresses": [ 00:07:43.848 { 00:07:43.848 "transport": "TCP", 00:07:43.848 "trtype": "TCP", 00:07:43.848 "adrfam": "IPv4", 00:07:43.848 "traddr": "10.0.0.2", 00:07:43.848 "trsvcid": "4420" 00:07:43.848 } 00:07:43.848 ], 00:07:43.848 "allow_any_host": true, 00:07:43.848 "hosts": [], 00:07:43.848 "serial_number": "SPDK00000000000002", 00:07:43.848 "model_number": "SPDK bdev Controller", 00:07:43.848 "max_namespaces": 32, 00:07:43.848 "min_cntlid": 1, 00:07:43.848 "max_cntlid": 65519, 00:07:43.848 "namespaces": [ 00:07:43.848 { 00:07:43.848 "nsid": 1, 00:07:43.848 "bdev_name": "Null2", 00:07:43.848 "name": "Null2", 00:07:43.848 "nguid": "5A18AF618A52497F808E7B2AFD577F5E", 00:07:43.848 "uuid": "5a18af61-8a52-497f-808e-7b2afd577f5e" 00:07:43.848 } 00:07:43.848 ] 00:07:43.848 }, 00:07:43.848 { 00:07:43.848 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:43.848 "subtype": "NVMe", 00:07:43.848 "listen_addresses": [ 00:07:43.848 { 00:07:43.848 "transport": "TCP", 00:07:43.848 "trtype": "TCP", 00:07:43.848 "adrfam": "IPv4", 00:07:43.848 "traddr": "10.0.0.2", 00:07:43.848 "trsvcid": "4420" 00:07:43.848 } 00:07:43.848 ], 00:07:43.848 "allow_any_host": true, 00:07:43.848 "hosts": [], 00:07:43.848 "serial_number": "SPDK00000000000003", 00:07:43.848 "model_number": "SPDK bdev Controller", 00:07:43.848 "max_namespaces": 32, 00:07:43.848 "min_cntlid": 1, 00:07:43.848 "max_cntlid": 65519, 00:07:43.848 "namespaces": [ 00:07:43.848 { 00:07:43.848 "nsid": 1, 00:07:43.848 "bdev_name": "Null3", 00:07:43.848 "name": "Null3", 00:07:43.848 "nguid": "278FED70B23E4A4B998F742A11D251EF", 00:07:43.848 "uuid": "278fed70-b23e-4a4b-998f-742a11d251ef" 00:07:43.848 } 00:07:43.848 ] 
00:07:43.848 }, 00:07:43.848 { 00:07:43.848 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:43.848 "subtype": "NVMe", 00:07:43.848 "listen_addresses": [ 00:07:43.848 { 00:07:43.848 "transport": "TCP", 00:07:43.848 "trtype": "TCP", 00:07:43.848 "adrfam": "IPv4", 00:07:43.848 "traddr": "10.0.0.2", 00:07:43.848 "trsvcid": "4420" 00:07:43.848 } 00:07:43.848 ], 00:07:43.848 "allow_any_host": true, 00:07:43.848 "hosts": [], 00:07:43.848 "serial_number": "SPDK00000000000004", 00:07:43.848 "model_number": "SPDK bdev Controller", 00:07:43.848 "max_namespaces": 32, 00:07:43.848 "min_cntlid": 1, 00:07:43.848 "max_cntlid": 65519, 00:07:43.848 "namespaces": [ 00:07:43.848 { 00:07:43.848 "nsid": 1, 00:07:43.848 "bdev_name": "Null4", 00:07:43.848 "name": "Null4", 00:07:43.848 "nguid": "7B8D8D047D60490397ABE1E22D7B5702", 00:07:43.848 "uuid": "7b8d8d04-7d60-4903-97ab-e1e22d7b5702" 00:07:43.848 } 00:07:43.848 ] 00:07:43.848 } 00:07:43.848 ] 00:07:43.848 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.848 12:50:48 -- target/discovery.sh@42 -- # seq 1 4 00:07:43.848 12:50:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.848 12:50:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.848 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.848 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:44.110 12:50:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:44.110 12:50:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:44.110 12:50:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
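The four subsystems reported above were built with an identical per-subsystem RPC pattern, and the loop continuing below tears them down symmetrically. Written as standalone rpc.py calls against the target's default /var/tmp/spdk.sock (the rpc_cmd shell helper in the trace issues the same RPCs), one iteration looks roughly like this; the hostnqn/hostid placeholders stand for the generated values used in this run:

  # create: null bdev -> subsystem -> namespace -> TCP listener
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # verify from the initiator side (this is what produced the 6-record discovery page above)
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=<hostnqn> --hostid=<hostid>

  # teardown, as in the delete loop around this point in the trace
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_null_delete Null1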
00:07:44.110 12:50:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:48 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:44.110 12:50:48 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:44.110 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.110 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 12:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.110 12:50:49 -- target/discovery.sh@49 -- # check_bdevs= 00:07:44.110 12:50:49 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:44.110 12:50:49 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:44.110 12:50:49 -- target/discovery.sh@57 -- # nvmftestfini 00:07:44.110 12:50:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:44.110 12:50:49 -- nvmf/common.sh@117 -- # sync 00:07:44.110 12:50:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.110 12:50:49 -- nvmf/common.sh@120 -- # set +e 00:07:44.110 12:50:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.110 12:50:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.110 rmmod nvme_tcp 00:07:44.110 rmmod nvme_fabrics 00:07:44.110 rmmod nvme_keyring 00:07:44.110 12:50:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.110 12:50:49 -- nvmf/common.sh@124 -- # set -e 00:07:44.110 12:50:49 -- nvmf/common.sh@125 -- # return 0 00:07:44.110 12:50:49 -- nvmf/common.sh@478 -- # '[' -n 3801474 ']' 00:07:44.110 12:50:49 -- nvmf/common.sh@479 -- # killprocess 3801474 00:07:44.110 12:50:49 -- common/autotest_common.sh@936 -- # '[' -z 3801474 ']' 00:07:44.110 12:50:49 -- common/autotest_common.sh@940 -- # kill -0 3801474 00:07:44.110 12:50:49 -- common/autotest_common.sh@941 -- # uname 00:07:44.110 12:50:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.110 12:50:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3801474 00:07:44.110 12:50:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:44.110 12:50:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:44.110 12:50:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3801474' 00:07:44.110 killing process with pid 3801474 00:07:44.110 12:50:49 -- common/autotest_common.sh@955 -- # kill 3801474 00:07:44.110 [2024-04-26 12:50:49.153712] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:44.110 12:50:49 -- common/autotest_common.sh@960 -- # wait 3801474 00:07:44.371 12:50:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:44.371 12:50:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:44.371 12:50:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:44.371 12:50:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.371 12:50:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.371 12:50:49 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.371 12:50:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.371 12:50:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.914 12:50:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.914 00:07:46.914 real 0m11.133s 00:07:46.915 user 0m8.409s 00:07:46.915 sys 0m5.626s 00:07:46.915 12:50:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.915 12:50:51 -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 ************************************ 00:07:46.915 END TEST nvmf_discovery 00:07:46.915 ************************************ 00:07:46.915 12:50:51 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:46.915 12:50:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:46.915 12:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.915 12:50:51 -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 ************************************ 00:07:46.915 START TEST nvmf_referrals 00:07:46.915 ************************************ 00:07:46.915 12:50:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:46.915 * Looking for test storage... 00:07:46.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.915 12:50:51 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.915 12:50:51 -- nvmf/common.sh@7 -- # uname -s 00:07:46.915 12:50:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.915 12:50:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.915 12:50:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.915 12:50:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.915 12:50:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.915 12:50:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.915 12:50:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.915 12:50:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.915 12:50:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.915 12:50:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.915 12:50:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:46.915 12:50:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:46.915 12:50:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.915 12:50:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.915 12:50:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.915 12:50:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.915 12:50:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.915 12:50:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.915 12:50:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.915 12:50:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.915 12:50:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.915 12:50:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.915 12:50:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.915 12:50:51 -- paths/export.sh@5 -- # export PATH 00:07:46.915 12:50:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.915 12:50:51 -- nvmf/common.sh@47 -- # : 0 00:07:46.915 12:50:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.915 12:50:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.915 12:50:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.915 12:50:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.915 12:50:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.915 12:50:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.915 12:50:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.915 12:50:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.915 12:50:51 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:46.915 12:50:51 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:46.915 12:50:51 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:46.915 12:50:51 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:46.915 12:50:51 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:46.915 12:50:51 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:46.915 12:50:51 -- target/referrals.sh@37 -- # nvmftestinit 00:07:46.915 12:50:51 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:46.915 12:50:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.915 12:50:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:46.915 12:50:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:46.915 12:50:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:46.915 12:50:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.915 12:50:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.915 12:50:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.915 12:50:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:46.915 12:50:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:46.915 12:50:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.915 12:50:51 -- common/autotest_common.sh@10 -- # set +x 00:07:55.059 12:50:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:55.059 12:50:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:55.059 12:50:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:55.059 12:50:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:55.059 12:50:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:55.059 12:50:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:55.059 12:50:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:55.059 12:50:58 -- nvmf/common.sh@295 -- # net_devs=() 00:07:55.059 12:50:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:55.059 12:50:58 -- nvmf/common.sh@296 -- # e810=() 00:07:55.059 12:50:58 -- nvmf/common.sh@296 -- # local -ga e810 00:07:55.059 12:50:58 -- nvmf/common.sh@297 -- # x722=() 00:07:55.059 12:50:58 -- nvmf/common.sh@297 -- # local -ga x722 00:07:55.059 12:50:58 -- nvmf/common.sh@298 -- # mlx=() 00:07:55.059 12:50:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:55.059 12:50:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.059 12:50:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:55.059 12:50:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:55.059 12:50:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:55.059 12:50:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.059 12:50:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:55.059 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:55.059 12:50:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.059 12:50:58 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.059 12:50:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:55.059 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:55.059 12:50:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:55.059 12:50:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.059 12:50:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.059 12:50:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:55.059 12:50:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.059 12:50:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:55.059 Found net devices under 0000:31:00.0: cvl_0_0 00:07:55.059 12:50:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.059 12:50:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.059 12:50:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.059 12:50:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:55.059 12:50:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.059 12:50:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:55.059 Found net devices under 0000:31:00.1: cvl_0_1 00:07:55.059 12:50:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.059 12:50:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:55.059 12:50:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:55.059 12:50:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:55.059 12:50:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:55.059 12:50:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.059 12:50:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.059 12:50:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.059 12:50:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:55.059 12:50:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.059 12:50:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.059 12:50:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:55.060 12:50:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.060 12:50:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.060 12:50:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:55.060 12:50:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:55.060 12:50:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.060 12:50:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:07:55.060 12:50:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.060 12:50:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.060 12:50:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:55.060 12:50:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.060 12:50:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.060 12:50:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.060 12:50:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:55.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:07:55.060 00:07:55.060 --- 10.0.0.2 ping statistics --- 00:07:55.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.060 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:07:55.060 12:50:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:07:55.060 00:07:55.060 --- 10.0.0.1 ping statistics --- 00:07:55.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.060 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:07:55.060 12:50:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.060 12:50:58 -- nvmf/common.sh@411 -- # return 0 00:07:55.060 12:50:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:55.060 12:50:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.060 12:50:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:55.060 12:50:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:55.060 12:50:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.060 12:50:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:55.060 12:50:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:55.060 12:50:58 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:55.060 12:50:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:55.060 12:50:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:55.060 12:50:58 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 12:50:58 -- nvmf/common.sh@470 -- # nvmfpid=3806026 00:07:55.060 12:50:58 -- nvmf/common.sh@471 -- # waitforlisten 3806026 00:07:55.060 12:50:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.060 12:50:58 -- common/autotest_common.sh@817 -- # '[' -z 3806026 ']' 00:07:55.060 12:50:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.060 12:50:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:55.060 12:50:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.060 12:50:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:55.060 12:50:58 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 [2024-04-26 12:50:59.010390] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
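nvmfappstart, traced above, launches nvmf_tgt inside the test namespace and then blocks in waitforlisten until the target's RPC socket answers. A minimal standalone equivalent with the flags from this run (paths shortened) is sketched below; the polling loop only illustrates what waitforlisten does and is not its literal implementation:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # do not issue RPCs until the UNIX-domain socket is up
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done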
00:07:55.060 [2024-04-26 12:50:59.010454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.060 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.060 [2024-04-26 12:50:59.083310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.060 [2024-04-26 12:50:59.157218] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.060 [2024-04-26 12:50:59.157260] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.060 [2024-04-26 12:50:59.157270] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.060 [2024-04-26 12:50:59.157282] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.060 [2024-04-26 12:50:59.157288] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.060 [2024-04-26 12:50:59.157437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.060 [2024-04-26 12:50:59.157571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.060 [2024-04-26 12:50:59.157727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.060 [2024-04-26 12:50:59.157727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.060 12:50:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:55.060 12:50:59 -- common/autotest_common.sh@850 -- # return 0 00:07:55.060 12:50:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:55.060 12:50:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 12:50:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.060 12:50:59 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.060 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 [2024-04-26 12:50:59.833355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.060 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:55.060 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 [2024-04-26 12:50:59.849529] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:55.060 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:55.060 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:55.060 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 12:50:59 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:55.060 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.060 12:50:59 -- target/referrals.sh@48 -- # jq length 00:07:55.060 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:55.060 12:50:59 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:55.060 12:50:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:55.060 12:50:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.060 12:50:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:55.060 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.060 12:50:59 -- target/referrals.sh@21 -- # sort 00:07:55.060 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:55.060 12:50:59 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:55.060 12:50:59 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:55.060 12:50:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.060 12:50:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.060 12:50:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.060 12:50:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.060 12:50:59 -- target/referrals.sh@26 -- # sort 00:07:55.321 12:51:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:55.321 12:51:00 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:55.321 12:51:00 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:55.321 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.321 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.321 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.321 12:51:00 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:55.321 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.321 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.321 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.321 12:51:00 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:55.321 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.321 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.321 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.321 12:51:00 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:07:55.321 12:51:00 -- target/referrals.sh@56 -- # jq length 00:07:55.321 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.321 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.321 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.321 12:51:00 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:55.321 12:51:00 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:55.321 12:51:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.321 12:51:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.321 12:51:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.321 12:51:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.321 12:51:00 -- target/referrals.sh@26 -- # sort 00:07:55.321 12:51:00 -- target/referrals.sh@26 -- # echo 00:07:55.321 12:51:00 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:55.321 12:51:00 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:55.321 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.321 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.581 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.581 12:51:00 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:55.581 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.581 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.581 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.581 12:51:00 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:55.581 12:51:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:55.581 12:51:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.581 12:51:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:55.581 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.581 12:51:00 -- target/referrals.sh@21 -- # sort 00:07:55.581 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.581 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.581 12:51:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:55.581 12:51:00 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.581 12:51:00 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:55.581 12:51:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.581 12:51:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.581 12:51:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.581 12:51:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.581 12:51:00 -- target/referrals.sh@26 -- # sort 00:07:55.581 12:51:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:55.581 12:51:00 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.581 12:51:00 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:07:55.581 12:51:00 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:55.581 12:51:00 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:55.581 12:51:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.581 12:51:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:55.841 12:51:00 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:55.841 12:51:00 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:55.841 12:51:00 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:55.841 12:51:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:55.841 12:51:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.841 12:51:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.101 12:51:00 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.101 12:51:00 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:56.101 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.101 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:56.101 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.101 12:51:00 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:56.101 12:51:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:56.101 12:51:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.101 12:51:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:56.101 12:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.101 12:51:00 -- target/referrals.sh@21 -- # sort 00:07:56.101 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:07:56.101 12:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.101 12:51:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:56.101 12:51:00 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.101 12:51:00 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:56.101 12:51:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.101 12:51:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.101 12:51:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.101 12:51:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.101 12:51:00 -- target/referrals.sh@26 -- # sort 00:07:56.101 12:51:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:56.101 12:51:01 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.101 12:51:01 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:56.101 12:51:01 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:56.101 12:51:01 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:07:56.101 12:51:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.101 12:51:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:56.362 12:51:01 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:56.362 12:51:01 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:56.362 12:51:01 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:56.362 12:51:01 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:56.362 12:51:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.362 12:51:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.362 12:51:01 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.362 12:51:01 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:56.362 12:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.362 12:51:01 -- common/autotest_common.sh@10 -- # set +x 00:07:56.362 12:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.362 12:51:01 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.362 12:51:01 -- target/referrals.sh@82 -- # jq length 00:07:56.362 12:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.362 12:51:01 -- common/autotest_common.sh@10 -- # set +x 00:07:56.362 12:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.621 12:51:01 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:56.621 12:51:01 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:56.621 12:51:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.621 12:51:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.621 12:51:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.621 12:51:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.621 12:51:01 -- target/referrals.sh@26 -- # sort 00:07:56.621 12:51:01 -- target/referrals.sh@26 -- # echo 00:07:56.621 12:51:01 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:56.621 12:51:01 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:56.621 12:51:01 -- target/referrals.sh@86 -- # nvmftestfini 00:07:56.621 12:51:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:56.621 12:51:01 -- nvmf/common.sh@117 -- # sync 00:07:56.621 12:51:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.621 12:51:01 -- nvmf/common.sh@120 -- # set +e 00:07:56.621 12:51:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.621 12:51:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.621 rmmod nvme_tcp 00:07:56.621 rmmod nvme_fabrics 00:07:56.621 rmmod nvme_keyring 00:07:56.621 12:51:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.621 12:51:01 -- nvmf/common.sh@124 -- # set -e 
00:07:56.621 12:51:01 -- nvmf/common.sh@125 -- # return 0 00:07:56.621 12:51:01 -- nvmf/common.sh@478 -- # '[' -n 3806026 ']' 00:07:56.621 12:51:01 -- nvmf/common.sh@479 -- # killprocess 3806026 00:07:56.621 12:51:01 -- common/autotest_common.sh@936 -- # '[' -z 3806026 ']' 00:07:56.621 12:51:01 -- common/autotest_common.sh@940 -- # kill -0 3806026 00:07:56.621 12:51:01 -- common/autotest_common.sh@941 -- # uname 00:07:56.621 12:51:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:56.621 12:51:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3806026 00:07:56.881 12:51:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:56.881 12:51:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:56.881 12:51:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3806026' 00:07:56.881 killing process with pid 3806026 00:07:56.881 12:51:01 -- common/autotest_common.sh@955 -- # kill 3806026 00:07:56.881 12:51:01 -- common/autotest_common.sh@960 -- # wait 3806026 00:07:56.881 12:51:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:56.881 12:51:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:56.881 12:51:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:56.881 12:51:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.881 12:51:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.881 12:51:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.881 12:51:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.881 12:51:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.417 12:51:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.417 00:07:59.417 real 0m12.346s 00:07:59.417 user 0m13.614s 00:07:59.417 sys 0m6.080s 00:07:59.417 12:51:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:59.417 12:51:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 ************************************ 00:07:59.417 END TEST nvmf_referrals 00:07:59.417 ************************************ 00:07:59.417 12:51:03 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:59.417 12:51:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:59.417 12:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.417 12:51:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 ************************************ 00:07:59.417 START TEST nvmf_connect_disconnect 00:07:59.417 ************************************ 00:07:59.417 12:51:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:59.417 * Looking for test storage... 
00:07:59.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.418 12:51:04 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.418 12:51:04 -- nvmf/common.sh@7 -- # uname -s 00:07:59.418 12:51:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.418 12:51:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.418 12:51:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.418 12:51:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.418 12:51:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.418 12:51:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.418 12:51:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.418 12:51:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.418 12:51:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.418 12:51:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.418 12:51:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:59.418 12:51:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:59.418 12:51:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.418 12:51:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.418 12:51:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.418 12:51:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.418 12:51:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.418 12:51:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.418 12:51:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.418 12:51:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.418 12:51:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.418 12:51:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.418 12:51:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.418 12:51:04 -- paths/export.sh@5 -- # export PATH 00:07:59.418 12:51:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.418 12:51:04 -- nvmf/common.sh@47 -- # : 0 00:07:59.418 12:51:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.418 12:51:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.418 12:51:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.418 12:51:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.418 12:51:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.418 12:51:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.418 12:51:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.418 12:51:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.418 12:51:04 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.418 12:51:04 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.418 12:51:04 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:59.418 12:51:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:59.418 12:51:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.418 12:51:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:59.418 12:51:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:59.418 12:51:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:59.418 12:51:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.418 12:51:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.418 12:51:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.418 12:51:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:59.418 12:51:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:59.418 12:51:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.418 12:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:07.585 12:51:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:07.585 12:51:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.585 12:51:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.585 12:51:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.585 12:51:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.585 12:51:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.585 12:51:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.585 12:51:11 -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.585 12:51:11 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:07.585 12:51:11 -- nvmf/common.sh@296 -- # e810=() 00:08:07.585 12:51:11 -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.585 12:51:11 -- nvmf/common.sh@297 -- # x722=() 00:08:07.585 12:51:11 -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.585 12:51:11 -- nvmf/common.sh@298 -- # mlx=() 00:08:07.585 12:51:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.585 12:51:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.585 12:51:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.585 12:51:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.585 12:51:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.585 12:51:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.585 12:51:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.585 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.585 12:51:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.585 12:51:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.585 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.585 12:51:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.585 12:51:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.585 12:51:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.585 12:51:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.585 12:51:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:07.585 12:51:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.585 12:51:11 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:07.585 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.585 12:51:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.585 12:51:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.585 12:51:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.585 12:51:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:07.585 12:51:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.585 12:51:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.585 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.585 12:51:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.585 12:51:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:07.586 12:51:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:07.586 12:51:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:07.586 12:51:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:07.586 12:51:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:07.586 12:51:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.586 12:51:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.586 12:51:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.586 12:51:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.586 12:51:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.586 12:51:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.586 12:51:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.586 12:51:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.586 12:51:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.586 12:51:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.586 12:51:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.586 12:51:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.586 12:51:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.586 12:51:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.586 12:51:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.586 12:51:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.586 12:51:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.586 12:51:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.586 12:51:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.586 12:51:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:08:07.586 00:08:07.586 --- 10.0.0.2 ping statistics --- 00:08:07.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.586 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:08:07.586 12:51:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:08:07.586 00:08:07.586 --- 10.0.0.1 ping statistics --- 00:08:07.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.586 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:07.586 12:51:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.586 12:51:11 -- nvmf/common.sh@411 -- # return 0 00:08:07.586 12:51:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:07.586 12:51:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.586 12:51:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:07.586 12:51:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:07.586 12:51:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.586 12:51:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:07.586 12:51:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:07.586 12:51:11 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:07.586 12:51:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:07.586 12:51:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:07.586 12:51:11 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 12:51:11 -- nvmf/common.sh@470 -- # nvmfpid=3811603 00:08:07.586 12:51:11 -- nvmf/common.sh@471 -- # waitforlisten 3811603 00:08:07.586 12:51:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.586 12:51:11 -- common/autotest_common.sh@817 -- # '[' -z 3811603 ']' 00:08:07.586 12:51:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.586 12:51:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:07.586 12:51:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.586 12:51:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:07.586 12:51:11 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 [2024-04-26 12:51:11.610364] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:08:07.586 [2024-04-26 12:51:11.610425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.586 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.586 [2024-04-26 12:51:11.684029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.586 [2024-04-26 12:51:11.756827] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.586 [2024-04-26 12:51:11.756877] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.586 [2024-04-26 12:51:11.756886] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.586 [2024-04-26 12:51:11.756894] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.586 [2024-04-26 12:51:11.756900] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:07.586 [2024-04-26 12:51:11.757054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.586 [2024-04-26 12:51:11.757191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.586 [2024-04-26 12:51:11.757351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.586 [2024-04-26 12:51:11.757351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.586 12:51:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:07.586 12:51:12 -- common/autotest_common.sh@850 -- # return 0 00:08:07.586 12:51:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:07.586 12:51:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:07.586 12:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 12:51:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:07.586 12:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.586 12:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 [2024-04-26 12:51:12.438397] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.586 12:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:07.586 12:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.586 12:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 12:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:07.586 12:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.586 12:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 12:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.586 12:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.586 12:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 12:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.586 12:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.586 12:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 [2024-04-26 12:51:12.497822] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.586 12:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:07.586 12:51:12 -- target/connect_disconnect.sh@34 -- # set +x 00:08:10.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:19.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.698 [2024-04-26 12:51:33.483737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c2570 is same with the state(5) to be set 00:08:28.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.284 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:00.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.763 12:55:05 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:00.763 12:55:05 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:00.763 12:55:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:00.763 12:55:05 -- nvmf/common.sh@117 -- # sync 00:12:00.763 12:55:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.763 12:55:05 -- nvmf/common.sh@120 -- # set +e 00:12:00.763 12:55:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.763 12:55:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.763 rmmod nvme_tcp 00:12:00.763 rmmod nvme_fabrics 00:12:00.763 rmmod nvme_keyring 00:12:00.763 12:55:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.763 12:55:05 -- nvmf/common.sh@124 -- # set -e 00:12:00.763 12:55:05 -- nvmf/common.sh@125 -- # return 0 00:12:00.763 12:55:05 -- nvmf/common.sh@478 -- # '[' -n 3811603 ']' 00:12:00.763 12:55:05 -- nvmf/common.sh@479 -- # killprocess 3811603 00:12:00.763 12:55:05 -- common/autotest_common.sh@936 -- # '[' -z 3811603 ']' 00:12:00.763 12:55:05 -- common/autotest_common.sh@940 -- # kill -0 3811603 00:12:00.763 12:55:05 -- common/autotest_common.sh@941 -- # uname 00:12:00.763 12:55:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.763 12:55:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3811603 00:12:00.763 12:55:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:00.763 12:55:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:00.763 12:55:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3811603' 00:12:00.763 killing process with pid 3811603 00:12:00.763 12:55:05 -- common/autotest_common.sh@955 -- # kill 3811603 00:12:00.763 12:55:05 -- common/autotest_common.sh@960 -- # wait 3811603 00:12:00.763 12:55:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:00.763 12:55:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:00.763 12:55:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:00.763 12:55:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.763 12:55:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.763 12:55:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.763 12:55:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.763 12:55:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.672 12:55:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:02.672 00:12:02.672 real 4m3.504s 00:12:02.672 user 15m29.423s 00:12:02.672 sys 0m21.774s 00:12:02.672 12:55:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:02.672 12:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:02.672 ************************************ 00:12:02.672 END TEST nvmf_connect_disconnect 00:12:02.672 ************************************ 00:12:02.672 12:55:07 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.672 12:55:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:02.672 12:55:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:02.672 12:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:02.934 ************************************ 00:12:02.934 START TEST nvmf_multitarget 00:12:02.934 ************************************ 00:12:02.934 12:55:07 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.934 * Looking for test storage... 00:12:02.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.934 12:55:07 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.934 12:55:07 -- nvmf/common.sh@7 -- # uname -s 00:12:02.934 12:55:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.934 12:55:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.934 12:55:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.934 12:55:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.934 12:55:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.934 12:55:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.934 12:55:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.934 12:55:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.934 12:55:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.934 12:55:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.934 12:55:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:02.934 12:55:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:02.934 12:55:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.934 12:55:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.934 12:55:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.934 12:55:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.934 12:55:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.934 12:55:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.934 12:55:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.934 12:55:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.934 12:55:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.934 12:55:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.934 12:55:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.934 12:55:07 -- paths/export.sh@5 -- # export PATH 00:12:02.934 12:55:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.934 12:55:07 -- nvmf/common.sh@47 -- # : 0 00:12:02.934 12:55:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.934 12:55:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.934 12:55:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.934 12:55:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.934 12:55:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.934 12:55:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.934 12:55:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.934 12:55:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.934 12:55:07 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.934 12:55:07 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:02.934 12:55:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:02.934 12:55:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.934 12:55:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:02.934 12:55:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:02.934 12:55:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:02.934 12:55:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.934 12:55:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.934 12:55:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.934 12:55:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:02.934 12:55:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:02.934 12:55:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:02.934 12:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:11.079 12:55:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:11.079 12:55:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:11.079 12:55:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:11.079 12:55:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:11.079 12:55:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:11.079 12:55:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:11.079 12:55:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:11.079 12:55:14 -- nvmf/common.sh@295 -- # net_devs=() 00:12:11.079 12:55:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:11.079 12:55:14 -- 
nvmf/common.sh@296 -- # e810=() 00:12:11.079 12:55:14 -- nvmf/common.sh@296 -- # local -ga e810 00:12:11.079 12:55:14 -- nvmf/common.sh@297 -- # x722=() 00:12:11.079 12:55:14 -- nvmf/common.sh@297 -- # local -ga x722 00:12:11.079 12:55:14 -- nvmf/common.sh@298 -- # mlx=() 00:12:11.079 12:55:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:11.079 12:55:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.079 12:55:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:11.079 12:55:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:11.079 12:55:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:11.079 12:55:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:11.079 12:55:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:11.079 12:55:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:11.079 12:55:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.079 12:55:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:11.080 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:11.080 12:55:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.080 12:55:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:11.080 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:11.080 12:55:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:11.080 12:55:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.080 12:55:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.080 12:55:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:11.080 12:55:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.080 12:55:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:12:11.080 Found net devices under 0000:31:00.0: cvl_0_0 00:12:11.080 12:55:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.080 12:55:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.080 12:55:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.080 12:55:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:11.080 12:55:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.080 12:55:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:11.080 Found net devices under 0000:31:00.1: cvl_0_1 00:12:11.080 12:55:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.080 12:55:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:11.080 12:55:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:11.080 12:55:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:11.080 12:55:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:11.080 12:55:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.080 12:55:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.080 12:55:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.080 12:55:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:11.080 12:55:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.080 12:55:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.080 12:55:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:11.080 12:55:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.080 12:55:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.080 12:55:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:11.080 12:55:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:11.080 12:55:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.080 12:55:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.080 12:55:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.080 12:55:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.080 12:55:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:11.080 12:55:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.080 12:55:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.080 12:55:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.080 12:55:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:11.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.778 ms 00:12:11.080 00:12:11.080 --- 10.0.0.2 ping statistics --- 00:12:11.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.080 rtt min/avg/max/mdev = 0.778/0.778/0.778/0.000 ms 00:12:11.080 12:55:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:12:11.080 00:12:11.080 --- 10.0.0.1 ping statistics --- 00:12:11.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.080 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:11.080 12:55:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.080 12:55:15 -- nvmf/common.sh@411 -- # return 0 00:12:11.080 12:55:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:11.080 12:55:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.080 12:55:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:11.080 12:55:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:11.080 12:55:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.080 12:55:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:11.080 12:55:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:11.080 12:55:15 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:11.080 12:55:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:11.080 12:55:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:11.080 12:55:15 -- common/autotest_common.sh@10 -- # set +x 00:12:11.080 12:55:15 -- nvmf/common.sh@470 -- # nvmfpid=3863098 00:12:11.080 12:55:15 -- nvmf/common.sh@471 -- # waitforlisten 3863098 00:12:11.080 12:55:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.080 12:55:15 -- common/autotest_common.sh@817 -- # '[' -z 3863098 ']' 00:12:11.080 12:55:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.080 12:55:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:11.080 12:55:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.080 12:55:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:11.080 12:55:15 -- common/autotest_common.sh@10 -- # set +x 00:12:11.080 [2024-04-26 12:55:15.180397] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:11.080 [2024-04-26 12:55:15.180457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.080 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.080 [2024-04-26 12:55:15.253430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.080 [2024-04-26 12:55:15.326622] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.080 [2024-04-26 12:55:15.326665] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.080 [2024-04-26 12:55:15.326673] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.080 [2024-04-26 12:55:15.326679] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.080 [2024-04-26 12:55:15.326685] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
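The nvmf_tcp_init sequence traced above builds a TCP loopback out of the two E810 ports (both 0x8086:0x159b, found as cvl_0_0 and cvl_0_1 under /sys/bus/pci/devices/*/net/ and presumably cabled back to back on this phy rig): cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. Condensed from the commands in the trace (a sketch, not the full nvmf_tcp_init):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on (including nvmf_tgt itself) is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is what NVMF_TARGET_NS_CMD holds.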
00:12:11.080 [2024-04-26 12:55:15.326832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.080 [2024-04-26 12:55:15.326954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.080 [2024-04-26 12:55:15.327322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.080 [2024-04-26 12:55:15.327324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.080 12:55:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:11.080 12:55:15 -- common/autotest_common.sh@850 -- # return 0 00:12:11.080 12:55:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:11.080 12:55:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:11.080 12:55:15 -- common/autotest_common.sh@10 -- # set +x 00:12:11.080 12:55:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.080 12:55:16 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:11.080 12:55:16 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.080 12:55:16 -- target/multitarget.sh@21 -- # jq length 00:12:11.080 12:55:16 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:11.080 12:55:16 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:11.341 "nvmf_tgt_1" 00:12:11.341 12:55:16 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:11.341 "nvmf_tgt_2" 00:12:11.341 12:55:16 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.342 12:55:16 -- target/multitarget.sh@28 -- # jq length 00:12:11.601 12:55:16 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:11.601 12:55:16 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:11.601 true 00:12:11.601 12:55:16 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:11.601 true 00:12:11.601 12:55:16 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.601 12:55:16 -- target/multitarget.sh@35 -- # jq length 00:12:11.861 12:55:16 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:11.861 12:55:16 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:11.861 12:55:16 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:11.861 12:55:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:11.861 12:55:16 -- nvmf/common.sh@117 -- # sync 00:12:11.861 12:55:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.861 12:55:16 -- nvmf/common.sh@120 -- # set +e 00:12:11.861 12:55:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.861 12:55:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.861 rmmod nvme_tcp 00:12:11.861 rmmod nvme_fabrics 00:12:11.861 rmmod nvme_keyring 00:12:11.861 12:55:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.861 12:55:16 -- nvmf/common.sh@124 -- # set -e 00:12:11.861 12:55:16 -- nvmf/common.sh@125 -- # return 0 
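The multitarget test above exercises several NVMe-oF targets in one SPDK process: it counts the default target, creates two named targets, confirms the count went from 1 to 3, deletes them again, and confirms the count is back to 1, each check being a jq length over the RPC output. The sequence, condensed from the trace (multitarget_rpc.py is test/nvmf/target/multitarget_rpc.py):

    multitarget_rpc.py nvmf_get_targets | jq length              # 1, only the default target
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length              # 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length              # 1 again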
00:12:11.861 12:55:16 -- nvmf/common.sh@478 -- # '[' -n 3863098 ']' 00:12:11.861 12:55:16 -- nvmf/common.sh@479 -- # killprocess 3863098 00:12:11.861 12:55:16 -- common/autotest_common.sh@936 -- # '[' -z 3863098 ']' 00:12:11.861 12:55:16 -- common/autotest_common.sh@940 -- # kill -0 3863098 00:12:11.861 12:55:16 -- common/autotest_common.sh@941 -- # uname 00:12:11.861 12:55:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:11.861 12:55:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3863098 00:12:11.861 12:55:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:11.861 12:55:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:11.861 12:55:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3863098' 00:12:11.861 killing process with pid 3863098 00:12:11.861 12:55:16 -- common/autotest_common.sh@955 -- # kill 3863098 00:12:11.861 12:55:16 -- common/autotest_common.sh@960 -- # wait 3863098 00:12:12.122 12:55:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:12.122 12:55:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:12.122 12:55:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:12.122 12:55:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.122 12:55:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:12.122 12:55:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.122 12:55:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.122 12:55:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.035 12:55:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.035 00:12:14.035 real 0m11.235s 00:12:14.035 user 0m9.281s 00:12:14.035 sys 0m5.724s 00:12:14.035 12:55:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:14.035 12:55:19 -- common/autotest_common.sh@10 -- # set +x 00:12:14.035 ************************************ 00:12:14.035 END TEST nvmf_multitarget 00:12:14.035 ************************************ 00:12:14.035 12:55:19 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.035 12:55:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:14.035 12:55:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.035 12:55:19 -- common/autotest_common.sh@10 -- # set +x 00:12:14.296 ************************************ 00:12:14.296 START TEST nvmf_rpc 00:12:14.296 ************************************ 00:12:14.296 12:55:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.296 * Looking for test storage... 
00:12:14.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.296 12:55:19 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.296 12:55:19 -- nvmf/common.sh@7 -- # uname -s 00:12:14.296 12:55:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.296 12:55:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.296 12:55:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.296 12:55:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.296 12:55:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.296 12:55:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.296 12:55:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.296 12:55:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.296 12:55:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.296 12:55:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.296 12:55:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:14.296 12:55:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:14.296 12:55:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.296 12:55:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.296 12:55:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.296 12:55:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.296 12:55:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.296 12:55:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.296 12:55:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.296 12:55:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.296 12:55:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.297 12:55:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.297 12:55:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.297 12:55:19 -- paths/export.sh@5 -- # export PATH 00:12:14.297 12:55:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.297 12:55:19 -- nvmf/common.sh@47 -- # : 0 00:12:14.297 12:55:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.297 12:55:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.297 12:55:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.297 12:55:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.297 12:55:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.297 12:55:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.297 12:55:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.297 12:55:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.297 12:55:19 -- target/rpc.sh@11 -- # loops=5 00:12:14.297 12:55:19 -- target/rpc.sh@23 -- # nvmftestinit 00:12:14.297 12:55:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:14.297 12:55:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.297 12:55:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:14.297 12:55:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:14.297 12:55:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:14.297 12:55:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.297 12:55:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.297 12:55:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.297 12:55:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:14.297 12:55:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:14.297 12:55:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.297 12:55:19 -- common/autotest_common.sh@10 -- # set +x 00:12:22.449 12:55:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:22.449 12:55:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.449 12:55:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.449 12:55:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.449 12:55:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.449 12:55:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.449 12:55:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.449 12:55:26 -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.449 12:55:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.449 12:55:26 -- nvmf/common.sh@296 -- # e810=() 00:12:22.449 12:55:26 -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.449 
12:55:26 -- nvmf/common.sh@297 -- # x722=() 00:12:22.449 12:55:26 -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.449 12:55:26 -- nvmf/common.sh@298 -- # mlx=() 00:12:22.449 12:55:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.449 12:55:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.449 12:55:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.449 12:55:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.449 12:55:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.449 12:55:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.449 12:55:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:22.449 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:22.449 12:55:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.449 12:55:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:22.449 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:22.449 12:55:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.449 12:55:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.449 12:55:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.449 12:55:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:22.449 12:55:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.449 12:55:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:22.449 Found net devices under 0000:31:00.0: cvl_0_0 00:12:22.449 12:55:26 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:22.449 12:55:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.449 12:55:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.449 12:55:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:22.449 12:55:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.449 12:55:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:22.449 Found net devices under 0000:31:00.1: cvl_0_1 00:12:22.449 12:55:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.449 12:55:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:22.449 12:55:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:22.449 12:55:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:22.449 12:55:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.449 12:55:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.449 12:55:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.449 12:55:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.449 12:55:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.449 12:55:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.449 12:55:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.449 12:55:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.449 12:55:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.449 12:55:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.449 12:55:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.449 12:55:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.449 12:55:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.449 12:55:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.449 12:55:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.449 12:55:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.449 12:55:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.449 12:55:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.449 12:55:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.449 12:55:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.776 ms 00:12:22.449 00:12:22.449 --- 10.0.0.2 ping statistics --- 00:12:22.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.449 rtt min/avg/max/mdev = 0.776/0.776/0.776/0.000 ms 00:12:22.449 12:55:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:22.449 00:12:22.449 --- 10.0.0.1 ping statistics --- 00:12:22.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.449 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:22.449 12:55:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.449 12:55:26 -- nvmf/common.sh@411 -- # return 0 00:12:22.449 12:55:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:22.449 12:55:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.449 12:55:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:22.449 12:55:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.449 12:55:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:22.449 12:55:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:22.449 12:55:26 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:22.449 12:55:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:22.449 12:55:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:22.449 12:55:26 -- common/autotest_common.sh@10 -- # set +x 00:12:22.450 12:55:26 -- nvmf/common.sh@470 -- # nvmfpid=3867850 00:12:22.450 12:55:26 -- nvmf/common.sh@471 -- # waitforlisten 3867850 00:12:22.450 12:55:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.450 12:55:26 -- common/autotest_common.sh@817 -- # '[' -z 3867850 ']' 00:12:22.450 12:55:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.450 12:55:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:22.450 12:55:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.450 12:55:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:22.450 12:55:26 -- common/autotest_common.sh@10 -- # set +x 00:12:22.450 [2024-04-26 12:55:26.708072] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:22.450 [2024-04-26 12:55:26.708119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.450 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.450 [2024-04-26 12:55:26.774333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.450 [2024-04-26 12:55:26.837812] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.450 [2024-04-26 12:55:26.837858] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.450 [2024-04-26 12:55:26.837871] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.450 [2024-04-26 12:55:26.837879] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.450 [2024-04-26 12:55:26.837886] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
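As in the multitarget run, nvmfappstart launches nvmf_tgt inside the target namespace and blocks until its RPC socket answers before the test proceeds; the 0xF core mask is why four reactors come up and why nvmf_get_stats below reports four poll groups. Roughly what the trace corresponds to (a sketch; waitforlisten's retry loop is abbreviated and rpc_addr defaults to /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten $nvmfpid        # polls /var/tmp/spdk.sock, up to 100 retries
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT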
00:12:22.450 [2024-04-26 12:55:26.837969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.450 [2024-04-26 12:55:26.838105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.450 [2024-04-26 12:55:26.838262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.450 [2024-04-26 12:55:26.838263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.450 12:55:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:22.450 12:55:27 -- common/autotest_common.sh@850 -- # return 0 00:12:22.450 12:55:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:22.450 12:55:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:22.450 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.710 12:55:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.711 12:55:27 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:22.711 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.711 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.711 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.711 12:55:27 -- target/rpc.sh@26 -- # stats='{ 00:12:22.711 "tick_rate": 2400000000, 00:12:22.711 "poll_groups": [ 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_0", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [] 00:12:22.711 }, 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_1", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [] 00:12:22.711 }, 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_2", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [] 00:12:22.711 }, 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_3", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [] 00:12:22.711 } 00:12:22.711 ] 00:12:22.711 }' 00:12:22.711 12:55:27 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:22.711 12:55:27 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:22.711 12:55:27 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:22.711 12:55:27 -- target/rpc.sh@15 -- # wc -l 00:12:22.711 12:55:27 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:22.711 12:55:27 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:22.711 12:55:27 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:22.711 12:55:27 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.711 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.711 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.711 [2024-04-26 12:55:27.638764] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.711 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.711 12:55:27 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:22.711 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.711 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.711 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.711 12:55:27 -- target/rpc.sh@33 -- # stats='{ 00:12:22.711 "tick_rate": 2400000000, 00:12:22.711 "poll_groups": [ 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_0", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [ 00:12:22.711 { 00:12:22.711 "trtype": "TCP" 00:12:22.711 } 00:12:22.711 ] 00:12:22.711 }, 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_1", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [ 00:12:22.711 { 00:12:22.711 "trtype": "TCP" 00:12:22.711 } 00:12:22.711 ] 00:12:22.711 }, 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_2", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [ 00:12:22.711 { 00:12:22.711 "trtype": "TCP" 00:12:22.711 } 00:12:22.711 ] 00:12:22.711 }, 00:12:22.711 { 00:12:22.711 "name": "nvmf_tgt_poll_group_3", 00:12:22.711 "admin_qpairs": 0, 00:12:22.711 "io_qpairs": 0, 00:12:22.711 "current_admin_qpairs": 0, 00:12:22.711 "current_io_qpairs": 0, 00:12:22.711 "pending_bdev_io": 0, 00:12:22.711 "completed_nvme_io": 0, 00:12:22.711 "transports": [ 00:12:22.711 { 00:12:22.711 "trtype": "TCP" 00:12:22.711 } 00:12:22.711 ] 00:12:22.711 } 00:12:22.711 ] 00:12:22.711 }' 00:12:22.711 12:55:27 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:22.711 12:55:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:22.711 12:55:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:22.711 12:55:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.711 12:55:27 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:22.711 12:55:27 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:22.711 12:55:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:22.711 12:55:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:22.711 12:55:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.711 12:55:27 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:22.711 12:55:27 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:22.711 12:55:27 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:22.711 12:55:27 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:22.711 12:55:27 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.711 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.711 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.972 Malloc1 00:12:22.972 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.972 12:55:27 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.972 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.972 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.972 
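rpc.sh first checks the idle state through nvmf_get_stats: four poll groups (one per core in the 0xF mask), no transports before nvmf_create_transport, and a TCP transport entry in every group afterwards, with all qpair counters still zero. The jcount/jsum helpers it uses are small jq/awk wrappers over that JSON; a rough reconstruction from the trace (the real helpers live in test/nvmf/target/rpc.sh and operate on the captured $stats string):

    stats=$(rpc_cmd nvmf_get_stats)
    jcount() { jq "$1" <<< "$stats" | wc -l; }                        # how many values match the filter
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }  # sum of the matched values
    jcount '.poll_groups[].name'            # 4
    jsum   '.poll_groups[].admin_qpairs'    # 0
    jsum   '.poll_groups[].io_qpairs'       # 0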
12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.972 12:55:27 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.972 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.972 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.972 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.972 12:55:27 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:22.972 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.972 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.972 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.972 12:55:27 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.972 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.972 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.972 [2024-04-26 12:55:27.830434] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.972 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.972 12:55:27 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:22.972 12:55:27 -- common/autotest_common.sh@638 -- # local es=0 00:12:22.972 12:55:27 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:22.972 12:55:27 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:22.972 12:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:22.972 12:55:27 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:22.972 12:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:22.972 12:55:27 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:22.972 12:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:22.972 12:55:27 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:22.972 12:55:27 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:22.972 12:55:27 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:22.972 [2024-04-26 12:55:27.857251] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:22.972 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:22.972 could not add new controller: failed to write to nvme-fabrics device 00:12:22.972 12:55:27 -- common/autotest_common.sh@641 -- # es=1 00:12:22.972 12:55:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:22.972 12:55:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:22.972 12:55:27 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:12:22.972 12:55:27 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:22.972 12:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.972 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.972 12:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.972 12:55:27 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.884 12:55:29 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.884 12:55:29 -- common/autotest_common.sh@1184 -- # local i=0 00:12:24.884 12:55:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.884 12:55:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:24.884 12:55:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:26.796 12:55:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:26.796 12:55:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:26.796 12:55:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.796 12:55:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:26.796 12:55:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.796 12:55:31 -- common/autotest_common.sh@1194 -- # return 0 00:12:26.796 12:55:31 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.796 12:55:31 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.796 12:55:31 -- common/autotest_common.sh@1205 -- # local i=0 00:12:26.796 12:55:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:26.796 12:55:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.796 12:55:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:26.796 12:55:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.796 12:55:31 -- common/autotest_common.sh@1217 -- # return 0 00:12:26.796 12:55:31 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:26.796 12:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.796 12:55:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.796 12:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.796 12:55:31 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.796 12:55:31 -- common/autotest_common.sh@638 -- # local es=0 00:12:26.796 12:55:31 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.796 12:55:31 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:26.796 12:55:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:26.796 12:55:31 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:26.796 12:55:31 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:26.796 12:55:31 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:26.796 12:55:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:26.796 12:55:31 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:26.796 12:55:31 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:26.796 12:55:31 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.796 [2024-04-26 12:55:31.591153] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:26.796 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:26.796 could not add new controller: failed to write to nvme-fabrics device 00:12:26.796 12:55:31 -- common/autotest_common.sh@641 -- # es=1 00:12:26.796 12:55:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:26.796 12:55:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:26.796 12:55:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:26.796 12:55:31 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:26.796 12:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.797 12:55:31 -- common/autotest_common.sh@10 -- # set +x 00:12:26.797 12:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.797 12:55:31 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.184 12:55:33 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.184 12:55:33 -- common/autotest_common.sh@1184 -- # local i=0 00:12:28.184 12:55:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.184 12:55:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:28.184 12:55:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:30.104 12:55:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:30.104 12:55:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:30.104 12:55:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.104 12:55:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:30.104 12:55:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.104 12:55:35 -- common/autotest_common.sh@1194 -- # return 0 00:12:30.104 12:55:35 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.366 12:55:35 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.366 12:55:35 -- common/autotest_common.sh@1205 -- # local i=0 00:12:30.366 12:55:35 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:30.366 12:55:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.366 12:55:35 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:30.366 12:55:35 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.366 12:55:35 -- common/autotest_common.sh@1217 -- # return 0 00:12:30.366 12:55:35 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.366 12:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.366 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.366 12:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.366 12:55:35 -- target/rpc.sh@81 -- # seq 1 5 00:12:30.366 12:55:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.366 12:55:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.366 12:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.366 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.366 12:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.366 12:55:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.366 12:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.366 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.366 [2024-04-26 12:55:35.297501] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.366 12:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.366 12:55:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.366 12:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.366 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.366 12:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.366 12:55:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.366 12:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.366 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:30.366 12:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.366 12:55:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.281 12:55:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.281 12:55:36 -- common/autotest_common.sh@1184 -- # local i=0 00:12:32.281 12:55:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.281 12:55:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:32.281 12:55:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:34.199 12:55:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:34.199 12:55:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:34.199 12:55:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.200 12:55:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:34.200 12:55:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.200 12:55:38 -- common/autotest_common.sh@1194 -- # return 0 00:12:34.200 12:55:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.200 12:55:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.200 12:55:38 -- common/autotest_common.sh@1205 -- # local i=0 00:12:34.200 12:55:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:34.200 12:55:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
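The first connect attempts above demonstrate the host access check: with allow_any_host disabled (nvmf_subsystem_allow_any_host -d), nvme connect is rejected with "does not allow host ..." and an Input/output error on /dev/nvme-fabrics, succeeds once nvmf_subsystem_add_host registers this host's NQN, fails again after nvmf_subsystem_remove_host, and succeeds once allow_any_host is re-enabled with -e. The test then repeats the same provision/connect/tear-down cycle five times (the "for i in $(seq 1 $loops)" trace above and below); one iteration, condensed:

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME             # a block device with this serial must appear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME  # ...and must disappear again
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1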
00:12:34.200 12:55:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:34.200 12:55:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.200 12:55:38 -- common/autotest_common.sh@1217 -- # return 0 00:12:34.200 12:55:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.200 12:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.200 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.200 12:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.200 12:55:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.200 12:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.200 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.200 12:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.200 12:55:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:34.200 12:55:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.200 12:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.200 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.200 12:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.200 12:55:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.200 12:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.200 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.200 [2024-04-26 12:55:39.043554] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.200 12:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.200 12:55:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:34.200 12:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.200 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.200 12:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.200 12:55:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.200 12:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.200 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.200 12:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.200 12:55:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.588 12:55:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.588 12:55:40 -- common/autotest_common.sh@1184 -- # local i=0 00:12:35.588 12:55:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.588 12:55:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:35.588 12:55:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:37.566 12:55:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:37.566 12:55:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:37.566 12:55:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.566 12:55:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:37.566 12:55:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.566 12:55:42 -- 
common/autotest_common.sh@1194 -- # return 0 00:12:37.566 12:55:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.827 12:55:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.827 12:55:42 -- common/autotest_common.sh@1205 -- # local i=0 00:12:37.827 12:55:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:37.827 12:55:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.827 12:55:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:37.827 12:55:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.827 12:55:42 -- common/autotest_common.sh@1217 -- # return 0 00:12:37.827 12:55:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.827 12:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.827 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.827 12:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.827 12:55:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.827 12:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.827 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.827 12:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.827 12:55:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.827 12:55:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.827 12:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.827 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.827 12:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.827 12:55:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.827 12:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.827 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.827 [2024-04-26 12:55:42.746545] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.827 12:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.827 12:55:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.827 12:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.827 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.827 12:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.827 12:55:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.827 12:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.827 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.827 12:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.827 12:55:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.216 12:55:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.216 12:55:44 -- common/autotest_common.sh@1184 -- # local i=0 00:12:39.216 12:55:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.216 12:55:44 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:12:39.216 12:55:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:41.762 12:55:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:41.762 12:55:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:41.762 12:55:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.762 12:55:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:41.762 12:55:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.762 12:55:46 -- common/autotest_common.sh@1194 -- # return 0 00:12:41.762 12:55:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.762 12:55:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.762 12:55:46 -- common/autotest_common.sh@1205 -- # local i=0 00:12:41.762 12:55:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:41.762 12:55:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.762 12:55:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:41.762 12:55:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.762 12:55:46 -- common/autotest_common.sh@1217 -- # return 0 00:12:41.762 12:55:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.762 12:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.762 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.762 12:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.762 12:55:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.762 12:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.762 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.762 12:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.762 12:55:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.762 12:55:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.762 12:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.762 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.762 12:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.762 12:55:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.762 12:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.762 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.762 [2024-04-26 12:55:46.456417] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.762 12:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.762 12:55:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.762 12:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.762 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.762 12:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.762 12:55:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.762 12:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.762 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.762 12:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.762 
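The waitforserial / waitforserial_disconnect gates used in each iteration do not query the fabric state directly; they watch lsblk until a block device whose SERIAL column matches the subsystem serial (SPDKISFASTANDAWESOME) appears or disappears. A rough reconstruction from the traced commands (the actual helpers are in test/common/autotest_common.sh; retry counts and sleeps are approximate):

    waitforserial() {
        local serial=$1 count=${2:-1} i=0
        sleep 2                                    # give udev time to create the namespace device
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == count )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 1
        done
        return 0
    }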
12:55:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.148 12:55:47 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.148 12:55:47 -- common/autotest_common.sh@1184 -- # local i=0 00:12:43.148 12:55:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.148 12:55:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:43.148 12:55:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:45.062 12:55:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:45.062 12:55:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:45.062 12:55:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.062 12:55:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:45.062 12:55:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.062 12:55:49 -- common/autotest_common.sh@1194 -- # return 0 00:12:45.062 12:55:49 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.062 12:55:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.062 12:55:50 -- common/autotest_common.sh@1205 -- # local i=0 00:12:45.062 12:55:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:45.062 12:55:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.062 12:55:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:45.062 12:55:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.324 12:55:50 -- common/autotest_common.sh@1217 -- # return 0 00:12:45.324 12:55:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.324 12:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.324 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.324 12:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.324 12:55:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.324 12:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.324 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.324 12:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.324 12:55:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.324 12:55:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.324 12:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.324 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.324 12:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.324 12:55:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.324 12:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.324 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.324 [2024-04-26 12:55:50.170539] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.324 12:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.324 12:55:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.324 
12:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.324 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.324 12:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.324 12:55:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.324 12:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.324 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.324 12:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.324 12:55:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.710 12:55:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.710 12:55:51 -- common/autotest_common.sh@1184 -- # local i=0 00:12:46.710 12:55:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.710 12:55:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:46.710 12:55:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:49.257 12:55:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:49.257 12:55:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:49.257 12:55:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.257 12:55:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:49.258 12:55:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.258 12:55:53 -- common/autotest_common.sh@1194 -- # return 0 00:12:49.258 12:55:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.258 12:55:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.258 12:55:53 -- common/autotest_common.sh@1205 -- # local i=0 00:12:49.258 12:55:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:49.258 12:55:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.258 12:55:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:49.258 12:55:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.258 12:55:53 -- common/autotest_common.sh@1217 -- # return 0 00:12:49.258 12:55:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@99 -- # seq 1 5 00:12:49.258 12:55:53 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.258 12:55:53 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 [2024-04-26 12:55:53.924833] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.258 12:55:53 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 [2024-04-26 12:55:53.988987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:53 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.258 12:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.258 12:55:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 [2024-04-26 12:55:54.045157] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.258 12:55:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 [2024-04-26 12:55:54.105367] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 
12:55:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.258 12:55:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.258 12:55:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.258 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.258 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 [2024-04-26 12:55:54.169589] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.258 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.259 12:55:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.259 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.259 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.259 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.259 12:55:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.259 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.259 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.259 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.259 12:55:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.259 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.259 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.259 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.259 12:55:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.259 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.259 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.259 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.259 12:55:54 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
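The nvmf_get_stats call issued above returns the per-poll-group counters printed next (admin_qpairs, io_qpairs, completed_nvme_io, ...). The jsum helper used afterwards totals one field across all poll groups; essentially (the function body below is a sketch, assuming the JSON has been captured into $stats):

# Roughly what target/rpc.sh's jsum does with the captured stats JSON.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
total_admin=$(jsum '.poll_groups[].admin_qpairs')   # 7 in this run
total_io=$(jsum '.poll_groups[].io_qpairs')         # 889 in this run
(( total_admin > 0 )) && (( total_io > 0 ))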
00:12:49.259 12:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.259 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:12:49.259 12:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.259 12:55:54 -- target/rpc.sh@110 -- # stats='{ 00:12:49.259 "tick_rate": 2400000000, 00:12:49.259 "poll_groups": [ 00:12:49.259 { 00:12:49.259 "name": "nvmf_tgt_poll_group_0", 00:12:49.259 "admin_qpairs": 0, 00:12:49.259 "io_qpairs": 224, 00:12:49.259 "current_admin_qpairs": 0, 00:12:49.259 "current_io_qpairs": 0, 00:12:49.259 "pending_bdev_io": 0, 00:12:49.259 "completed_nvme_io": 367, 00:12:49.259 "transports": [ 00:12:49.259 { 00:12:49.259 "trtype": "TCP" 00:12:49.259 } 00:12:49.259 ] 00:12:49.259 }, 00:12:49.259 { 00:12:49.259 "name": "nvmf_tgt_poll_group_1", 00:12:49.259 "admin_qpairs": 1, 00:12:49.259 "io_qpairs": 223, 00:12:49.259 "current_admin_qpairs": 0, 00:12:49.259 "current_io_qpairs": 0, 00:12:49.259 "pending_bdev_io": 0, 00:12:49.259 "completed_nvme_io": 376, 00:12:49.259 "transports": [ 00:12:49.259 { 00:12:49.259 "trtype": "TCP" 00:12:49.259 } 00:12:49.259 ] 00:12:49.259 }, 00:12:49.259 { 00:12:49.259 "name": "nvmf_tgt_poll_group_2", 00:12:49.259 "admin_qpairs": 6, 00:12:49.259 "io_qpairs": 218, 00:12:49.259 "current_admin_qpairs": 0, 00:12:49.259 "current_io_qpairs": 0, 00:12:49.259 "pending_bdev_io": 0, 00:12:49.259 "completed_nvme_io": 220, 00:12:49.259 "transports": [ 00:12:49.259 { 00:12:49.259 "trtype": "TCP" 00:12:49.259 } 00:12:49.259 ] 00:12:49.259 }, 00:12:49.259 { 00:12:49.259 "name": "nvmf_tgt_poll_group_3", 00:12:49.259 "admin_qpairs": 0, 00:12:49.259 "io_qpairs": 224, 00:12:49.259 "current_admin_qpairs": 0, 00:12:49.259 "current_io_qpairs": 0, 00:12:49.259 "pending_bdev_io": 0, 00:12:49.259 "completed_nvme_io": 276, 00:12:49.259 "transports": [ 00:12:49.259 { 00:12:49.259 "trtype": "TCP" 00:12:49.259 } 00:12:49.259 ] 00:12:49.259 } 00:12:49.259 ] 00:12:49.259 }' 00:12:49.259 12:55:54 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.259 12:55:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.259 12:55:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.259 12:55:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.259 12:55:54 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:49.259 12:55:54 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.259 12:55:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.259 12:55:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.259 12:55:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.520 12:55:54 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:49.520 12:55:54 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:49.520 12:55:54 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:49.520 12:55:54 -- target/rpc.sh@123 -- # nvmftestfini 00:12:49.520 12:55:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:49.520 12:55:54 -- nvmf/common.sh@117 -- # sync 00:12:49.520 12:55:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.520 12:55:54 -- nvmf/common.sh@120 -- # set +e 00:12:49.520 12:55:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.520 12:55:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.520 rmmod nvme_tcp 00:12:49.520 rmmod nvme_fabrics 00:12:49.520 rmmod nvme_keyring 00:12:49.520 12:55:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.520 12:55:54 -- nvmf/common.sh@124 -- # set -e 00:12:49.520 12:55:54 -- 
nvmf/common.sh@125 -- # return 0 00:12:49.520 12:55:54 -- nvmf/common.sh@478 -- # '[' -n 3867850 ']' 00:12:49.520 12:55:54 -- nvmf/common.sh@479 -- # killprocess 3867850 00:12:49.520 12:55:54 -- common/autotest_common.sh@936 -- # '[' -z 3867850 ']' 00:12:49.520 12:55:54 -- common/autotest_common.sh@940 -- # kill -0 3867850 00:12:49.520 12:55:54 -- common/autotest_common.sh@941 -- # uname 00:12:49.520 12:55:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:49.520 12:55:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3867850 00:12:49.520 12:55:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:49.520 12:55:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:49.520 12:55:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3867850' 00:12:49.520 killing process with pid 3867850 00:12:49.520 12:55:54 -- common/autotest_common.sh@955 -- # kill 3867850 00:12:49.520 12:55:54 -- common/autotest_common.sh@960 -- # wait 3867850 00:12:49.781 12:55:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:49.781 12:55:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:49.781 12:55:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:49.781 12:55:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.781 12:55:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.781 12:55:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.781 12:55:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.781 12:55:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.694 12:55:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:51.694 00:12:51.694 real 0m37.452s 00:12:51.694 user 1m52.912s 00:12:51.694 sys 0m7.266s 00:12:51.694 12:55:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:51.694 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.694 ************************************ 00:12:51.694 END TEST nvmf_rpc 00:12:51.694 ************************************ 00:12:51.694 12:55:56 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:51.694 12:55:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:51.695 12:55:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.695 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.957 ************************************ 00:12:51.957 START TEST nvmf_invalid 00:12:51.957 ************************************ 00:12:51.957 12:55:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:51.957 * Looking for test storage... 
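The nvmf_rpc teardown logged above (nvmftestfini plus killprocess) reduces to roughly the following; the PID, module names, and interface name are taken from this run, and the wait only succeeds because the target is a child of the test shell:

# Condensed sketch of the cleanup sequence seen above.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
nvmfpid=3867850
if kill -0 "$nvmfpid" 2>/dev/null; then
    # reactor_0 is the SPDK app's process name; never kill a sudo wrapper
    [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ] && kill "$nvmfpid"
    wait "$nvmfpid"                 # target is a child of this shell
fi
ip -4 addr flush cvl_0_1            # drop the initiator-side test address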
00:12:51.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.957 12:55:56 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.957 12:55:56 -- nvmf/common.sh@7 -- # uname -s 00:12:51.957 12:55:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.957 12:55:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.957 12:55:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.957 12:55:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.957 12:55:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.957 12:55:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.957 12:55:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.957 12:55:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.957 12:55:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.957 12:55:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.957 12:55:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:51.957 12:55:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:51.957 12:55:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.957 12:55:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.957 12:55:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.957 12:55:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.957 12:55:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.957 12:55:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.957 12:55:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.957 12:55:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.957 12:55:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.957 12:55:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.957 12:55:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.957 12:55:56 -- paths/export.sh@5 -- # export PATH 00:12:51.957 12:55:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.957 12:55:56 -- nvmf/common.sh@47 -- # : 0 00:12:51.957 12:55:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.957 12:55:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.957 12:55:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.957 12:55:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.957 12:55:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.957 12:55:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.957 12:55:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.957 12:55:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.957 12:55:56 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:51.957 12:55:56 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.957 12:55:56 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:51.957 12:55:56 -- target/invalid.sh@14 -- # target=foobar 00:12:51.957 12:55:56 -- target/invalid.sh@16 -- # RANDOM=0 00:12:51.957 12:55:56 -- target/invalid.sh@34 -- # nvmftestinit 00:12:51.957 12:55:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:51.957 12:55:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.957 12:55:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:51.957 12:55:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:51.957 12:55:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:51.957 12:55:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.957 12:55:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.957 12:55:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.957 12:55:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:51.957 12:55:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:51.957 12:55:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.957 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.092 12:56:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:00.092 12:56:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.092 12:56:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.092 12:56:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.092 12:56:03 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.092 12:56:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.092 12:56:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.092 12:56:03 -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.092 12:56:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.092 12:56:03 -- nvmf/common.sh@296 -- # e810=() 00:13:00.092 12:56:03 -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.092 12:56:03 -- nvmf/common.sh@297 -- # x722=() 00:13:00.092 12:56:03 -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.092 12:56:03 -- nvmf/common.sh@298 -- # mlx=() 00:13:00.092 12:56:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.092 12:56:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.092 12:56:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.092 12:56:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.092 12:56:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.092 12:56:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.092 12:56:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:00.092 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:00.092 12:56:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.092 12:56:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:00.092 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:00.092 12:56:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.092 12:56:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.092 
12:56:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.092 12:56:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:00.092 12:56:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.092 12:56:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:00.092 Found net devices under 0000:31:00.0: cvl_0_0 00:13:00.092 12:56:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.092 12:56:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.092 12:56:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.092 12:56:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:00.092 12:56:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.092 12:56:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:00.092 Found net devices under 0000:31:00.1: cvl_0_1 00:13:00.092 12:56:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.092 12:56:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:00.092 12:56:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:00.092 12:56:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:00.092 12:56:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:00.092 12:56:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.092 12:56:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.092 12:56:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.092 12:56:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.092 12:56:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.092 12:56:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.092 12:56:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.092 12:56:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.092 12:56:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.092 12:56:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.092 12:56:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.092 12:56:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.092 12:56:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.092 12:56:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.093 12:56:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.093 12:56:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.093 12:56:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.093 12:56:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.093 12:56:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.093 12:56:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:13:00.093 00:13:00.093 --- 10.0.0.2 ping statistics --- 00:13:00.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.093 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:13:00.093 12:56:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:13:00.093 00:13:00.093 --- 10.0.0.1 ping statistics --- 00:13:00.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.093 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:13:00.093 12:56:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.093 12:56:04 -- nvmf/common.sh@411 -- # return 0 00:13:00.093 12:56:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:00.093 12:56:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.093 12:56:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:00.093 12:56:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:00.093 12:56:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.093 12:56:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:00.093 12:56:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:00.093 12:56:04 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:00.093 12:56:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:00.093 12:56:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:00.093 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.093 12:56:04 -- nvmf/common.sh@470 -- # nvmfpid=3877518 00:13:00.093 12:56:04 -- nvmf/common.sh@471 -- # waitforlisten 3877518 00:13:00.093 12:56:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.093 12:56:04 -- common/autotest_common.sh@817 -- # '[' -z 3877518 ']' 00:13:00.093 12:56:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.093 12:56:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:00.093 12:56:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.093 12:56:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:00.093 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.093 [2024-04-26 12:56:04.381292] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:00.093 [2024-04-26 12:56:04.381357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.093 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.093 [2024-04-26 12:56:04.453965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.093 [2024-04-26 12:56:04.528202] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.093 [2024-04-26 12:56:04.528244] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.093 [2024-04-26 12:56:04.528252] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.093 [2024-04-26 12:56:04.528258] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.093 [2024-04-26 12:56:04.528264] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
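For the nvmf_invalid run, the trace above rebuilds the same NVMe/TCP test topology: one e810 port (cvl_0_0) is moved into a network namespace, addressed, and the target is started inside it. Condensed, with the interface names and flags from this run (binary path shortened for illustration):

# Sketch of the namespace-based test topology set up above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# nvmf_tgt then runs inside the namespace with the flags logged above:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &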
00:13:00.093 [2024-04-26 12:56:04.528403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.093 [2024-04-26 12:56:04.528544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.093 [2024-04-26 12:56:04.528735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.093 [2024-04-26 12:56:04.528736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.354 12:56:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:00.354 12:56:05 -- common/autotest_common.sh@850 -- # return 0 00:13:00.354 12:56:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:00.354 12:56:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:00.354 12:56:05 -- common/autotest_common.sh@10 -- # set +x 00:13:00.354 12:56:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.354 12:56:05 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.354 12:56:05 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11944 00:13:00.354 [2024-04-26 12:56:05.342760] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:00.354 12:56:05 -- target/invalid.sh@40 -- # out='request: 00:13:00.354 { 00:13:00.354 "nqn": "nqn.2016-06.io.spdk:cnode11944", 00:13:00.354 "tgt_name": "foobar", 00:13:00.354 "method": "nvmf_create_subsystem", 00:13:00.354 "req_id": 1 00:13:00.354 } 00:13:00.354 Got JSON-RPC error response 00:13:00.354 response: 00:13:00.354 { 00:13:00.354 "code": -32603, 00:13:00.354 "message": "Unable to find target foobar" 00:13:00.354 }' 00:13:00.354 12:56:05 -- target/invalid.sh@41 -- # [[ request: 00:13:00.354 { 00:13:00.354 "nqn": "nqn.2016-06.io.spdk:cnode11944", 00:13:00.354 "tgt_name": "foobar", 00:13:00.354 "method": "nvmf_create_subsystem", 00:13:00.354 "req_id": 1 00:13:00.354 } 00:13:00.354 Got JSON-RPC error response 00:13:00.354 response: 00:13:00.354 { 00:13:00.354 "code": -32603, 00:13:00.354 "message": "Unable to find target foobar" 00:13:00.354 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:00.354 12:56:05 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:00.354 12:56:05 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26764 00:13:00.615 [2024-04-26 12:56:05.507327] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26764: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:00.615 12:56:05 -- target/invalid.sh@45 -- # out='request: 00:13:00.615 { 00:13:00.615 "nqn": "nqn.2016-06.io.spdk:cnode26764", 00:13:00.615 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.615 "method": "nvmf_create_subsystem", 00:13:00.615 "req_id": 1 00:13:00.615 } 00:13:00.615 Got JSON-RPC error response 00:13:00.615 response: 00:13:00.615 { 00:13:00.615 "code": -32602, 00:13:00.615 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.615 }' 00:13:00.615 12:56:05 -- target/invalid.sh@46 -- # [[ request: 00:13:00.615 { 00:13:00.615 "nqn": "nqn.2016-06.io.spdk:cnode26764", 00:13:00.615 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.615 "method": "nvmf_create_subsystem", 00:13:00.615 "req_id": 1 00:13:00.615 } 00:13:00.615 Got JSON-RPC error response 00:13:00.615 response: 00:13:00.615 { 
00:13:00.615 "code": -32602, 00:13:00.615 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.615 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:00.615 12:56:05 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:00.615 12:56:05 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13610 00:13:00.877 [2024-04-26 12:56:05.679882] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13610: invalid model number 'SPDK_Controller' 00:13:00.877 12:56:05 -- target/invalid.sh@50 -- # out='request: 00:13:00.877 { 00:13:00.877 "nqn": "nqn.2016-06.io.spdk:cnode13610", 00:13:00.877 "model_number": "SPDK_Controller\u001f", 00:13:00.877 "method": "nvmf_create_subsystem", 00:13:00.877 "req_id": 1 00:13:00.877 } 00:13:00.877 Got JSON-RPC error response 00:13:00.877 response: 00:13:00.877 { 00:13:00.877 "code": -32602, 00:13:00.877 "message": "Invalid MN SPDK_Controller\u001f" 00:13:00.877 }' 00:13:00.877 12:56:05 -- target/invalid.sh@51 -- # [[ request: 00:13:00.877 { 00:13:00.877 "nqn": "nqn.2016-06.io.spdk:cnode13610", 00:13:00.877 "model_number": "SPDK_Controller\u001f", 00:13:00.877 "method": "nvmf_create_subsystem", 00:13:00.877 "req_id": 1 00:13:00.877 } 00:13:00.877 Got JSON-RPC error response 00:13:00.877 response: 00:13:00.877 { 00:13:00.877 "code": -32602, 00:13:00.877 "message": "Invalid MN SPDK_Controller\u001f" 00:13:00.877 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:00.877 12:56:05 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:00.877 12:56:05 -- target/invalid.sh@19 -- # local length=21 ll 00:13:00.877 12:56:05 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:00.877 12:56:05 -- target/invalid.sh@21 -- # local chars 00:13:00.877 12:56:05 -- target/invalid.sh@22 -- # local string 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 38 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+='&' 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 81 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=Q 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 78 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=N 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 50 00:13:00.877 12:56:05 -- 
target/invalid.sh@25 -- # echo -e '\x32' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=2 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 83 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=S 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 33 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+='!' 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 91 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+='[' 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 45 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=- 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 123 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+='{' 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 37 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=% 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 102 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=f 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 50 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=2 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 52 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=4 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 108 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=l 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # printf %x 87 00:13:00.877 12:56:05 -- 
target/invalid.sh@25 -- # echo -e '\x57' 00:13:00.877 12:56:05 -- target/invalid.sh@25 -- # string+=W 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.877 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # printf %x 66 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # string+=B 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # printf %x 77 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # string+=M 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # printf %x 49 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # string+=1 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # printf %x 94 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # string+='^' 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # printf %x 73 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # string+=I 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # printf %x 94 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:00.878 12:56:05 -- target/invalid.sh@25 -- # string+='^' 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.878 12:56:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.878 12:56:05 -- target/invalid.sh@28 -- # [[ & == \- ]] 00:13:00.878 12:56:05 -- target/invalid.sh@31 -- # echo '&QN2S![-{%f24lWBM1^I^' 00:13:00.878 12:56:05 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '&QN2S![-{%f24lWBM1^I^' nqn.2016-06.io.spdk:cnode27573 00:13:01.139 [2024-04-26 12:56:06.008897] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27573: invalid serial number '&QN2S![-{%f24lWBM1^I^' 00:13:01.139 12:56:06 -- target/invalid.sh@54 -- # out='request: 00:13:01.139 { 00:13:01.139 "nqn": "nqn.2016-06.io.spdk:cnode27573", 00:13:01.139 "serial_number": "&QN2S![-{%f24lWBM1^I^", 00:13:01.139 "method": "nvmf_create_subsystem", 00:13:01.139 "req_id": 1 00:13:01.139 } 00:13:01.139 Got JSON-RPC error response 00:13:01.139 response: 00:13:01.139 { 00:13:01.139 "code": -32602, 00:13:01.139 "message": "Invalid SN &QN2S![-{%f24lWBM1^I^" 00:13:01.139 }' 00:13:01.139 12:56:06 -- target/invalid.sh@55 -- # [[ request: 00:13:01.139 { 00:13:01.139 "nqn": "nqn.2016-06.io.spdk:cnode27573", 00:13:01.139 "serial_number": "&QN2S![-{%f24lWBM1^I^", 00:13:01.139 "method": "nvmf_create_subsystem", 00:13:01.139 "req_id": 1 00:13:01.139 } 00:13:01.139 Got JSON-RPC error response 00:13:01.139 response: 00:13:01.139 { 00:13:01.139 "code": -32602, 00:13:01.139 
"message": "Invalid SN &QN2S![-{%f24lWBM1^I^" 00:13:01.139 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:01.139 12:56:06 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:01.139 12:56:06 -- target/invalid.sh@19 -- # local length=41 ll 00:13:01.139 12:56:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.139 12:56:06 -- target/invalid.sh@21 -- # local chars 00:13:01.139 12:56:06 -- target/invalid.sh@22 -- # local string 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 113 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=q 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 120 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=x 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 86 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=V 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 93 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=']' 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 126 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+='~' 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 83 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=S 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 53 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=5 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 59 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=';' 00:13:01.139 12:56:06 -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 127 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 108 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=l 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 48 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=0 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 57 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=9 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 107 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=k 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 102 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=f 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 87 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=W 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 110 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=n 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 90 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=Z 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 122 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=z 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.139 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # printf %x 65 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:01.139 12:56:06 -- target/invalid.sh@25 -- # string+=A 00:13:01.139 12:56:06 -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:01.140 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.140 12:56:06 -- target/invalid.sh@25 -- # printf %x 32 00:13:01.140 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:01.140 12:56:06 -- target/invalid.sh@25 -- # string+=' ' 00:13:01.140 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.140 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.140 12:56:06 -- target/invalid.sh@25 -- # printf %x 79 00:13:01.140 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:01.140 12:56:06 -- target/invalid.sh@25 -- # string+=O 00:13:01.140 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.400 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.400 12:56:06 -- target/invalid.sh@25 -- # printf %x 114 00:13:01.400 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:01.400 12:56:06 -- target/invalid.sh@25 -- # string+=r 00:13:01.400 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.400 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.400 12:56:06 -- target/invalid.sh@25 -- # printf %x 118 00:13:01.400 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:01.400 12:56:06 -- target/invalid.sh@25 -- # string+=v 00:13:01.400 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.400 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.400 12:56:06 -- target/invalid.sh@25 -- # printf %x 68 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=D 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 119 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=w 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 118 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=v 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 119 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=w 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 55 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=7 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 72 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=H 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 116 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=t 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 112 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=p 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 46 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=. 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 96 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+='`' 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 80 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=P 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 91 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+='[' 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 114 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=r 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 106 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=j 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 94 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+='^' 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 122 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=z 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 113 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=q 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # printf %x 117 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:01.401 12:56:06 -- target/invalid.sh@25 -- # string+=u 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:01.401 12:56:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.401 12:56:06 -- target/invalid.sh@28 -- # [[ q == \- ]] 00:13:01.401 12:56:06 -- target/invalid.sh@31 -- # echo 'qxV]~S5;l09kfWnZzA OrvDwvw7Htp.`P[rj^zqu' 00:13:01.401 12:56:06 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'qxV]~S5;l09kfWnZzA OrvDwvw7Htp.`P[rj^zqu' nqn.2016-06.io.spdk:cnode13019 00:13:01.661 [2024-04-26 12:56:06.486455] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13019: invalid model number 'qxV]~S5;l09kfWnZzA OrvDwvw7Htp.`P[rj^zqu' 00:13:01.661 12:56:06 -- target/invalid.sh@58 -- # out='request: 00:13:01.661 { 00:13:01.661 "nqn": "nqn.2016-06.io.spdk:cnode13019", 00:13:01.661 "model_number": "qxV]~S5;\u007fl09kfWnZzA OrvDwvw7Htp.`P[rj^zqu", 00:13:01.661 "method": "nvmf_create_subsystem", 00:13:01.661 "req_id": 1 00:13:01.661 } 00:13:01.661 Got JSON-RPC error response 00:13:01.661 response: 00:13:01.661 { 00:13:01.661 "code": -32602, 00:13:01.661 "message": "Invalid MN qxV]~S5;\u007fl09kfWnZzA OrvDwvw7Htp.`P[rj^zqu" 00:13:01.661 }' 00:13:01.661 12:56:06 -- target/invalid.sh@59 -- # [[ request: 00:13:01.661 { 00:13:01.661 "nqn": "nqn.2016-06.io.spdk:cnode13019", 00:13:01.661 "model_number": "qxV]~S5;\u007fl09kfWnZzA OrvDwvw7Htp.`P[rj^zqu", 00:13:01.661 "method": "nvmf_create_subsystem", 00:13:01.661 "req_id": 1 00:13:01.662 } 00:13:01.662 Got JSON-RPC error response 00:13:01.662 response: 00:13:01.662 { 00:13:01.662 "code": -32602, 00:13:01.662 "message": "Invalid MN qxV]~S5;\u007fl09kfWnZzA OrvDwvw7Htp.`P[rj^zqu" 00:13:01.662 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.662 12:56:06 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:01.662 [2024-04-26 12:56:06.655074] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.662 12:56:06 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:01.922 12:56:06 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:01.922 12:56:06 -- target/invalid.sh@67 -- # echo '' 00:13:01.922 12:56:06 -- target/invalid.sh@67 -- # head -n 1 00:13:01.922 12:56:06 -- target/invalid.sh@67 -- # IP= 00:13:01.922 12:56:06 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:02.183 [2024-04-26 12:56:07.004149] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:02.183 12:56:07 -- target/invalid.sh@69 -- # out='request: 00:13:02.183 { 00:13:02.183 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.183 "listen_address": { 00:13:02.183 "trtype": "tcp", 00:13:02.183 "traddr": "", 00:13:02.183 "trsvcid": "4421" 00:13:02.183 }, 00:13:02.183 "method": "nvmf_subsystem_remove_listener", 00:13:02.183 "req_id": 1 00:13:02.183 } 00:13:02.183 Got JSON-RPC error response 00:13:02.183 response: 00:13:02.183 { 00:13:02.183 "code": -32602, 00:13:02.183 "message": "Invalid parameters" 00:13:02.183 }' 00:13:02.183 12:56:07 -- target/invalid.sh@70 -- # [[ request: 00:13:02.183 { 00:13:02.183 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.183 "listen_address": { 00:13:02.183 "trtype": "tcp", 00:13:02.183 "traddr": "", 00:13:02.183 "trsvcid": "4421" 00:13:02.183 }, 00:13:02.183 "method": 
"nvmf_subsystem_remove_listener", 00:13:02.183 "req_id": 1 00:13:02.183 } 00:13:02.183 Got JSON-RPC error response 00:13:02.183 response: 00:13:02.183 { 00:13:02.183 "code": -32602, 00:13:02.183 "message": "Invalid parameters" 00:13:02.183 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:02.183 12:56:07 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18469 -i 0 00:13:02.183 [2024-04-26 12:56:07.168642] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18469: invalid cntlid range [0-65519] 00:13:02.183 12:56:07 -- target/invalid.sh@73 -- # out='request: 00:13:02.183 { 00:13:02.183 "nqn": "nqn.2016-06.io.spdk:cnode18469", 00:13:02.183 "min_cntlid": 0, 00:13:02.183 "method": "nvmf_create_subsystem", 00:13:02.183 "req_id": 1 00:13:02.183 } 00:13:02.183 Got JSON-RPC error response 00:13:02.183 response: 00:13:02.183 { 00:13:02.183 "code": -32602, 00:13:02.183 "message": "Invalid cntlid range [0-65519]" 00:13:02.183 }' 00:13:02.183 12:56:07 -- target/invalid.sh@74 -- # [[ request: 00:13:02.183 { 00:13:02.183 "nqn": "nqn.2016-06.io.spdk:cnode18469", 00:13:02.183 "min_cntlid": 0, 00:13:02.183 "method": "nvmf_create_subsystem", 00:13:02.183 "req_id": 1 00:13:02.183 } 00:13:02.183 Got JSON-RPC error response 00:13:02.183 response: 00:13:02.183 { 00:13:02.183 "code": -32602, 00:13:02.183 "message": "Invalid cntlid range [0-65519]" 00:13:02.183 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.183 12:56:07 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27434 -i 65520 00:13:02.443 [2024-04-26 12:56:07.341185] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27434: invalid cntlid range [65520-65519] 00:13:02.444 12:56:07 -- target/invalid.sh@75 -- # out='request: 00:13:02.444 { 00:13:02.444 "nqn": "nqn.2016-06.io.spdk:cnode27434", 00:13:02.444 "min_cntlid": 65520, 00:13:02.444 "method": "nvmf_create_subsystem", 00:13:02.444 "req_id": 1 00:13:02.444 } 00:13:02.444 Got JSON-RPC error response 00:13:02.444 response: 00:13:02.444 { 00:13:02.444 "code": -32602, 00:13:02.444 "message": "Invalid cntlid range [65520-65519]" 00:13:02.444 }' 00:13:02.444 12:56:07 -- target/invalid.sh@76 -- # [[ request: 00:13:02.444 { 00:13:02.444 "nqn": "nqn.2016-06.io.spdk:cnode27434", 00:13:02.444 "min_cntlid": 65520, 00:13:02.444 "method": "nvmf_create_subsystem", 00:13:02.444 "req_id": 1 00:13:02.444 } 00:13:02.444 Got JSON-RPC error response 00:13:02.444 response: 00:13:02.444 { 00:13:02.444 "code": -32602, 00:13:02.444 "message": "Invalid cntlid range [65520-65519]" 00:13:02.444 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.444 12:56:07 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3478 -I 0 00:13:02.705 [2024-04-26 12:56:07.505718] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3478: invalid cntlid range [1-0] 00:13:02.705 12:56:07 -- target/invalid.sh@77 -- # out='request: 00:13:02.705 { 00:13:02.705 "nqn": "nqn.2016-06.io.spdk:cnode3478", 00:13:02.705 "max_cntlid": 0, 00:13:02.705 "method": "nvmf_create_subsystem", 00:13:02.705 "req_id": 1 00:13:02.705 } 00:13:02.705 Got JSON-RPC error response 00:13:02.705 response: 00:13:02.705 { 00:13:02.705 "code": -32602, 00:13:02.705 "message": 
"Invalid cntlid range [1-0]" 00:13:02.705 }' 00:13:02.705 12:56:07 -- target/invalid.sh@78 -- # [[ request: 00:13:02.705 { 00:13:02.705 "nqn": "nqn.2016-06.io.spdk:cnode3478", 00:13:02.705 "max_cntlid": 0, 00:13:02.705 "method": "nvmf_create_subsystem", 00:13:02.705 "req_id": 1 00:13:02.705 } 00:13:02.705 Got JSON-RPC error response 00:13:02.705 response: 00:13:02.705 { 00:13:02.705 "code": -32602, 00:13:02.705 "message": "Invalid cntlid range [1-0]" 00:13:02.705 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.705 12:56:07 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32033 -I 65520 00:13:02.705 [2024-04-26 12:56:07.678311] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32033: invalid cntlid range [1-65520] 00:13:02.705 12:56:07 -- target/invalid.sh@79 -- # out='request: 00:13:02.705 { 00:13:02.705 "nqn": "nqn.2016-06.io.spdk:cnode32033", 00:13:02.705 "max_cntlid": 65520, 00:13:02.705 "method": "nvmf_create_subsystem", 00:13:02.705 "req_id": 1 00:13:02.705 } 00:13:02.705 Got JSON-RPC error response 00:13:02.705 response: 00:13:02.705 { 00:13:02.705 "code": -32602, 00:13:02.705 "message": "Invalid cntlid range [1-65520]" 00:13:02.705 }' 00:13:02.705 12:56:07 -- target/invalid.sh@80 -- # [[ request: 00:13:02.705 { 00:13:02.705 "nqn": "nqn.2016-06.io.spdk:cnode32033", 00:13:02.705 "max_cntlid": 65520, 00:13:02.705 "method": "nvmf_create_subsystem", 00:13:02.705 "req_id": 1 00:13:02.705 } 00:13:02.705 Got JSON-RPC error response 00:13:02.705 response: 00:13:02.705 { 00:13:02.705 "code": -32602, 00:13:02.705 "message": "Invalid cntlid range [1-65520]" 00:13:02.705 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.705 12:56:07 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19658 -i 6 -I 5 00:13:02.966 [2024-04-26 12:56:07.850796] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19658: invalid cntlid range [6-5] 00:13:02.966 12:56:07 -- target/invalid.sh@83 -- # out='request: 00:13:02.966 { 00:13:02.966 "nqn": "nqn.2016-06.io.spdk:cnode19658", 00:13:02.966 "min_cntlid": 6, 00:13:02.966 "max_cntlid": 5, 00:13:02.966 "method": "nvmf_create_subsystem", 00:13:02.966 "req_id": 1 00:13:02.966 } 00:13:02.966 Got JSON-RPC error response 00:13:02.966 response: 00:13:02.966 { 00:13:02.966 "code": -32602, 00:13:02.966 "message": "Invalid cntlid range [6-5]" 00:13:02.966 }' 00:13:02.966 12:56:07 -- target/invalid.sh@84 -- # [[ request: 00:13:02.966 { 00:13:02.966 "nqn": "nqn.2016-06.io.spdk:cnode19658", 00:13:02.966 "min_cntlid": 6, 00:13:02.966 "max_cntlid": 5, 00:13:02.966 "method": "nvmf_create_subsystem", 00:13:02.966 "req_id": 1 00:13:02.966 } 00:13:02.966 Got JSON-RPC error response 00:13:02.966 response: 00:13:02.966 { 00:13:02.966 "code": -32602, 00:13:02.966 "message": "Invalid cntlid range [6-5]" 00:13:02.966 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.966 12:56:07 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:02.966 12:56:07 -- target/invalid.sh@87 -- # out='request: 00:13:02.966 { 00:13:02.966 "name": "foobar", 00:13:02.966 "method": "nvmf_delete_target", 00:13:02.966 "req_id": 1 00:13:02.966 } 00:13:02.966 Got JSON-RPC error response 00:13:02.966 response: 00:13:02.966 { 00:13:02.966 
"code": -32602, 00:13:02.966 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:02.966 }' 00:13:02.966 12:56:07 -- target/invalid.sh@88 -- # [[ request: 00:13:02.966 { 00:13:02.966 "name": "foobar", 00:13:02.966 "method": "nvmf_delete_target", 00:13:02.966 "req_id": 1 00:13:02.966 } 00:13:02.966 Got JSON-RPC error response 00:13:02.966 response: 00:13:02.966 { 00:13:02.966 "code": -32602, 00:13:02.967 "message": "The specified target doesn't exist, cannot delete it." 00:13:02.967 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:02.967 12:56:07 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:02.967 12:56:07 -- target/invalid.sh@91 -- # nvmftestfini 00:13:02.967 12:56:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:02.967 12:56:07 -- nvmf/common.sh@117 -- # sync 00:13:02.967 12:56:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.967 12:56:07 -- nvmf/common.sh@120 -- # set +e 00:13:02.967 12:56:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.967 12:56:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.967 rmmod nvme_tcp 00:13:02.967 rmmod nvme_fabrics 00:13:02.967 rmmod nvme_keyring 00:13:02.967 12:56:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.228 12:56:08 -- nvmf/common.sh@124 -- # set -e 00:13:03.228 12:56:08 -- nvmf/common.sh@125 -- # return 0 00:13:03.228 12:56:08 -- nvmf/common.sh@478 -- # '[' -n 3877518 ']' 00:13:03.228 12:56:08 -- nvmf/common.sh@479 -- # killprocess 3877518 00:13:03.228 12:56:08 -- common/autotest_common.sh@936 -- # '[' -z 3877518 ']' 00:13:03.228 12:56:08 -- common/autotest_common.sh@940 -- # kill -0 3877518 00:13:03.228 12:56:08 -- common/autotest_common.sh@941 -- # uname 00:13:03.228 12:56:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:03.228 12:56:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3877518 00:13:03.228 12:56:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:03.228 12:56:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:03.228 12:56:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3877518' 00:13:03.228 killing process with pid 3877518 00:13:03.228 12:56:08 -- common/autotest_common.sh@955 -- # kill 3877518 00:13:03.228 12:56:08 -- common/autotest_common.sh@960 -- # wait 3877518 00:13:03.228 12:56:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:03.229 12:56:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:03.229 12:56:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:03.229 12:56:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.229 12:56:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.229 12:56:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.229 12:56:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.229 12:56:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.779 12:56:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:05.779 00:13:05.779 real 0m13.471s 00:13:05.779 user 0m18.949s 00:13:05.779 sys 0m6.340s 00:13:05.779 12:56:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:05.779 12:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.779 ************************************ 00:13:05.779 END TEST nvmf_invalid 00:13:05.779 ************************************ 00:13:05.779 12:56:10 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:05.779 12:56:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:05.779 12:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.779 12:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.779 ************************************ 00:13:05.779 START TEST nvmf_abort 00:13:05.779 ************************************ 00:13:05.779 12:56:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:05.779 * Looking for test storage... 00:13:05.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.779 12:56:10 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.779 12:56:10 -- nvmf/common.sh@7 -- # uname -s 00:13:05.779 12:56:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.779 12:56:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.779 12:56:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.779 12:56:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.779 12:56:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.779 12:56:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.779 12:56:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.779 12:56:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.779 12:56:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.779 12:56:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.779 12:56:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:05.779 12:56:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:05.779 12:56:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.779 12:56:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.779 12:56:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.779 12:56:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.779 12:56:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.779 12:56:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.779 12:56:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.779 12:56:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.779 12:56:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.779 12:56:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.779 12:56:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.779 12:56:10 -- paths/export.sh@5 -- # export PATH 00:13:05.779 12:56:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.779 12:56:10 -- nvmf/common.sh@47 -- # : 0 00:13:05.779 12:56:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.779 12:56:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.779 12:56:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.779 12:56:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.779 12:56:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.779 12:56:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.779 12:56:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.779 12:56:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.779 12:56:10 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.779 12:56:10 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:05.779 12:56:10 -- target/abort.sh@14 -- # nvmftestinit 00:13:05.779 12:56:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:05.779 12:56:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.779 12:56:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:05.779 12:56:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:05.779 12:56:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:05.779 12:56:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.780 12:56:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.780 12:56:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.780 12:56:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:05.780 12:56:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:05.780 12:56:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.780 12:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:12.374 12:56:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
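The entries that follow trace gather_supported_nvmf_pci_devs building its device lists and keeping only NICs the test supports, here the two Intel E810 ports (vendor 0x8086, device 0x159b). A minimal sketch of that matching idea, assuming the sysfs paths and IDs visible in this log rather than the literal nvmf/common.sh implementation:

  # Sketch only: pick out E810 ports by PCI vendor/device ID and list the net devices
  # under each function, mirroring the "Found 0000:31:00.x" entries reported below.
  intel=0x8086; e810=0x159b
  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810" ]] || continue
    echo "Found ${pci##*/} ($intel - $e810)"
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done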
00:13:12.374 12:56:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.374 12:56:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.374 12:56:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.374 12:56:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.374 12:56:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.374 12:56:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.374 12:56:17 -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.374 12:56:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.374 12:56:17 -- nvmf/common.sh@296 -- # e810=() 00:13:12.374 12:56:17 -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.374 12:56:17 -- nvmf/common.sh@297 -- # x722=() 00:13:12.374 12:56:17 -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.374 12:56:17 -- nvmf/common.sh@298 -- # mlx=() 00:13:12.374 12:56:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.374 12:56:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.374 12:56:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.374 12:56:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.374 12:56:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.374 12:56:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.374 12:56:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:12.374 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:12.374 12:56:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.374 12:56:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:12.374 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:12.374 12:56:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
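With both E810 ports identified, nvmf_tcp_init (traced in the entries below) splits them across network namespaces so the target and the host-side initiator talk over a real link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2 for the target, cvl_0_1 stays in the default namespace as 10.0.0.1 for the initiator, TCP port 4420 is allowed through, and the nvme-tcp module is loaded. A condensed recap of those commands, using the interface names and addresses shown in this log:

  # Recap of the topology the following entries build (names/addresses as seen in this log).
  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first E810 port -> target namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target reachability check
  modprobe nvme-tcp                                              # kernel NVMe/TCP module loaded by the common setup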
00:13:12.374 12:56:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.374 12:56:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.374 12:56:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.374 12:56:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.374 12:56:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:12.374 Found net devices under 0000:31:00.0: cvl_0_0 00:13:12.374 12:56:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.374 12:56:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.374 12:56:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.374 12:56:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.374 12:56:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.374 12:56:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:12.374 Found net devices under 0000:31:00.1: cvl_0_1 00:13:12.374 12:56:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.374 12:56:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:12.374 12:56:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:12.374 12:56:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:12.374 12:56:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:12.374 12:56:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.374 12:56:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.374 12:56:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.374 12:56:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.374 12:56:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.374 12:56:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.374 12:56:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.374 12:56:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.374 12:56:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.374 12:56:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.374 12:56:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.374 12:56:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.374 12:56:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.636 12:56:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.636 12:56:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.636 12:56:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.637 12:56:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.637 12:56:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.637 12:56:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.637 12:56:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:12.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:13:12.637 00:13:12.637 --- 10.0.0.2 ping statistics --- 00:13:12.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.637 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:13:12.637 12:56:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:13:12.898 00:13:12.899 --- 10.0.0.1 ping statistics --- 00:13:12.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.899 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:13:12.899 12:56:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.899 12:56:17 -- nvmf/common.sh@411 -- # return 0 00:13:12.899 12:56:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:12.899 12:56:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.899 12:56:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:12.899 12:56:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:12.899 12:56:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.899 12:56:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:12.899 12:56:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:12.899 12:56:17 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:12.899 12:56:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:12.899 12:56:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:12.899 12:56:17 -- common/autotest_common.sh@10 -- # set +x 00:13:12.899 12:56:17 -- nvmf/common.sh@470 -- # nvmfpid=3882691 00:13:12.899 12:56:17 -- nvmf/common.sh@471 -- # waitforlisten 3882691 00:13:12.899 12:56:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:12.899 12:56:17 -- common/autotest_common.sh@817 -- # '[' -z 3882691 ']' 00:13:12.899 12:56:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.899 12:56:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:12.899 12:56:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.899 12:56:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:12.899 12:56:17 -- common/autotest_common.sh@10 -- # set +x 00:13:12.899 [2024-04-26 12:56:17.792618] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:12.899 [2024-04-26 12:56:17.792680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.899 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.899 [2024-04-26 12:56:17.880870] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.160 [2024-04-26 12:56:17.972754] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.160 [2024-04-26 12:56:17.972814] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:13.160 [2024-04-26 12:56:17.972823] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.160 [2024-04-26 12:56:17.972830] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.160 [2024-04-26 12:56:17.972844] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.160 [2024-04-26 12:56:17.973028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.160 [2024-04-26 12:56:17.973285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.160 [2024-04-26 12:56:17.973286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.733 12:56:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:13.733 12:56:18 -- common/autotest_common.sh@850 -- # return 0 00:13:13.733 12:56:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:13.734 12:56:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 12:56:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.734 12:56:18 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:13.734 12:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 [2024-04-26 12:56:18.626982] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.734 12:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.734 12:56:18 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:13.734 12:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 Malloc0 00:13:13.734 12:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.734 12:56:18 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:13.734 12:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 Delay0 00:13:13.734 12:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.734 12:56:18 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:13.734 12:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 12:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.734 12:56:18 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:13.734 12:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 12:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.734 12:56:18 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:13.734 12:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 [2024-04-26 12:56:18.702315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.734 12:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.734 12:56:18 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.734 12:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.734 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 12:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.734 12:56:18 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:13.734 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.994 [2024-04-26 12:56:18.823251] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:15.909 Initializing NVMe Controllers 00:13:15.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:15.910 controller IO queue size 128 less than required 00:13:15.910 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:15.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:15.910 Initialization complete. Launching workers. 00:13:15.910 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35347 00:13:15.910 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35408, failed to submit 62 00:13:15.910 success 35351, unsuccess 57, failed 0 00:13:15.910 12:56:20 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:15.910 12:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.910 12:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:15.910 12:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.910 12:56:20 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:15.910 12:56:20 -- target/abort.sh@38 -- # nvmftestfini 00:13:15.910 12:56:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:15.910 12:56:20 -- nvmf/common.sh@117 -- # sync 00:13:15.910 12:56:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.910 12:56:20 -- nvmf/common.sh@120 -- # set +e 00:13:15.910 12:56:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.910 12:56:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.910 rmmod nvme_tcp 00:13:15.910 rmmod nvme_fabrics 00:13:15.910 rmmod nvme_keyring 00:13:15.910 12:56:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.910 12:56:20 -- nvmf/common.sh@124 -- # set -e 00:13:15.910 12:56:20 -- nvmf/common.sh@125 -- # return 0 00:13:15.910 12:56:20 -- nvmf/common.sh@478 -- # '[' -n 3882691 ']' 00:13:15.910 12:56:20 -- nvmf/common.sh@479 -- # killprocess 3882691 00:13:15.910 12:56:20 -- common/autotest_common.sh@936 -- # '[' -z 3882691 ']' 00:13:15.910 12:56:20 -- common/autotest_common.sh@940 -- # kill -0 3882691 00:13:15.910 12:56:20 -- common/autotest_common.sh@941 -- # uname 00:13:15.910 12:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.910 12:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3882691 00:13:16.171 12:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:16.171 12:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:16.171 12:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3882691' 00:13:16.171 killing process with pid 3882691 00:13:16.171 12:56:20 -- common/autotest_common.sh@955 -- # kill 3882691 00:13:16.171 12:56:20 -- 
common/autotest_common.sh@960 -- # wait 3882691 00:13:16.171 12:56:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:16.171 12:56:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:16.171 12:56:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:16.171 12:56:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.171 12:56:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.171 12:56:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.171 12:56:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.171 12:56:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.715 12:56:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.715 00:13:18.715 real 0m12.742s 00:13:18.715 user 0m13.201s 00:13:18.715 sys 0m6.162s 00:13:18.716 12:56:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.716 12:56:23 -- common/autotest_common.sh@10 -- # set +x 00:13:18.716 ************************************ 00:13:18.716 END TEST nvmf_abort 00:13:18.716 ************************************ 00:13:18.716 12:56:23 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:18.716 12:56:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:18.716 12:56:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.716 12:56:23 -- common/autotest_common.sh@10 -- # set +x 00:13:18.716 ************************************ 00:13:18.716 START TEST nvmf_ns_hotplug_stress 00:13:18.716 ************************************ 00:13:18.716 12:56:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:18.716 * Looking for test storage... 
00:13:18.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.716 12:56:23 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.716 12:56:23 -- nvmf/common.sh@7 -- # uname -s 00:13:18.716 12:56:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.716 12:56:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.716 12:56:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.716 12:56:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.716 12:56:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.716 12:56:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.716 12:56:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.716 12:56:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.716 12:56:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.716 12:56:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.716 12:56:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:18.716 12:56:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:18.716 12:56:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.716 12:56:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.716 12:56:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.716 12:56:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.716 12:56:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.716 12:56:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.716 12:56:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.716 12:56:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.716 12:56:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.716 12:56:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.716 12:56:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.716 12:56:23 -- paths/export.sh@5 -- # export PATH 00:13:18.716 12:56:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.716 12:56:23 -- nvmf/common.sh@47 -- # : 0 00:13:18.716 12:56:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.716 12:56:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.716 12:56:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.716 12:56:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.716 12:56:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.716 12:56:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.716 12:56:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.716 12:56:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.716 12:56:23 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.716 12:56:23 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:13:18.716 12:56:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:18.716 12:56:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.716 12:56:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:18.716 12:56:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:18.716 12:56:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:18.716 12:56:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.716 12:56:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.716 12:56:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.716 12:56:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:18.716 12:56:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:18.716 12:56:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.716 12:56:23 -- common/autotest_common.sh@10 -- # set +x 00:13:25.421 12:56:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:25.421 12:56:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.421 12:56:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.421 12:56:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.421 12:56:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.421 12:56:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.421 12:56:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.421 12:56:30 -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.421 12:56:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.421 12:56:30 -- nvmf/common.sh@296 
-- # e810=() 00:13:25.421 12:56:30 -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.421 12:56:30 -- nvmf/common.sh@297 -- # x722=() 00:13:25.421 12:56:30 -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.421 12:56:30 -- nvmf/common.sh@298 -- # mlx=() 00:13:25.421 12:56:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.421 12:56:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.421 12:56:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.421 12:56:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:25.421 12:56:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.421 12:56:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.421 12:56:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:25.421 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:25.421 12:56:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.421 12:56:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:25.421 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:25.421 12:56:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.421 12:56:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:25.421 12:56:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.422 12:56:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.422 12:56:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.422 12:56:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.422 12:56:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:25.422 Found 
net devices under 0000:31:00.0: cvl_0_0 00:13:25.422 12:56:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.422 12:56:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.422 12:56:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.422 12:56:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.422 12:56:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.422 12:56:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:25.422 Found net devices under 0000:31:00.1: cvl_0_1 00:13:25.422 12:56:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.422 12:56:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:25.422 12:56:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:25.422 12:56:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:25.422 12:56:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:25.422 12:56:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:25.422 12:56:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.422 12:56:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.422 12:56:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.422 12:56:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.422 12:56:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.422 12:56:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.422 12:56:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.422 12:56:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.422 12:56:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.422 12:56:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.422 12:56:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.422 12:56:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.422 12:56:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.683 12:56:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.683 12:56:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.683 12:56:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.683 12:56:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.683 12:56:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.683 12:56:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.683 12:56:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:13:25.683 00:13:25.683 --- 10.0.0.2 ping statistics --- 00:13:25.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.684 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:13:25.684 12:56:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:13:25.684 00:13:25.684 --- 10.0.0.1 ping statistics --- 00:13:25.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.684 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:13:25.684 12:56:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.684 12:56:30 -- nvmf/common.sh@411 -- # return 0 00:13:25.684 12:56:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:25.684 12:56:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.684 12:56:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:25.684 12:56:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:25.684 12:56:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.684 12:56:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:25.684 12:56:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:25.945 12:56:30 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:25.945 12:56:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:25.945 12:56:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:25.945 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:13:25.945 12:56:30 -- nvmf/common.sh@470 -- # nvmfpid=3887538 00:13:25.945 12:56:30 -- nvmf/common.sh@471 -- # waitforlisten 3887538 00:13:25.945 12:56:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.945 12:56:30 -- common/autotest_common.sh@817 -- # '[' -z 3887538 ']' 00:13:25.945 12:56:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.945 12:56:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:25.945 12:56:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.945 12:56:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:25.945 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:13:25.945 [2024-04-26 12:56:30.838306] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:25.945 [2024-04-26 12:56:30.838370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.945 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.945 [2024-04-26 12:56:30.905491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.945 [2024-04-26 12:56:30.990054] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.945 [2024-04-26 12:56:30.990112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.945 [2024-04-26 12:56:30.990120] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.945 [2024-04-26 12:56:30.990125] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.945 [2024-04-26 12:56:30.990131] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:25.945 [2024-04-26 12:56:30.990297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.945 [2024-04-26 12:56:30.992868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.945 [2024-04-26 12:56:30.993081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.887 12:56:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:26.887 12:56:31 -- common/autotest_common.sh@850 -- # return 0 00:13:26.887 12:56:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:26.887 12:56:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:26.887 12:56:31 -- common/autotest_common.sh@10 -- # set +x 00:13:26.887 12:56:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.887 12:56:31 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:26.887 12:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:26.887 [2024-04-26 12:56:31.859356] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.887 12:56:31 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:27.147 12:56:32 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.147 [2024-04-26 12:56:32.196499] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.408 12:56:32 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:27.408 12:56:32 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:27.670 Malloc0 00:13:27.670 12:56:32 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:27.670 Delay0 00:13:27.670 12:56:32 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.931 12:56:32 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:28.191 NULL1 00:13:28.191 12:56:33 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:28.191 12:56:33 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:28.191 12:56:33 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3888148 00:13:28.191 12:56:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:28.191 12:56:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.191 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.571 Read completed with error (sct=0, sc=11) 00:13:29.571 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.571 12:56:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.571 12:56:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:29.571 12:56:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:29.831 true 00:13:29.831 12:56:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:29.831 12:56:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.771 12:56:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.771 12:56:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:13:30.771 12:56:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:31.031 true 00:13:31.031 12:56:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:31.031 12:56:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.031 12:56:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.291 12:56:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:13:31.291 12:56:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:31.292 true 00:13:31.552 12:56:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:31.552 12:56:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.552 12:56:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.812 12:56:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:13:31.812 12:56:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:31.812 true 00:13:31.812 12:56:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:31.812 12:56:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.078 12:56:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.340 12:56:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:13:32.340 12:56:37 -- target/ns_hotplug_stress.sh@41 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:32.340 true 00:13:32.340 12:56:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:32.340 12:56:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.600 12:56:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.861 12:56:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:13:32.861 12:56:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:32.861 true 00:13:32.861 12:56:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:32.861 12:56:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.122 12:56:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.122 12:56:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:13:33.122 12:56:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:33.383 true 00:13:33.383 12:56:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:33.383 12:56:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.645 12:56:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.645 12:56:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:13:33.645 12:56:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:33.907 true 00:13:33.907 12:56:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:33.907 12:56:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.168 12:56:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.168 12:56:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:13:34.168 12:56:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:34.429 true 00:13:34.429 12:56:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:34.429 12:56:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.429 12:56:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.690 12:56:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:13:34.690 12:56:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:34.950 true 
00:13:34.950 12:56:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:34.950 12:56:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.950 12:56:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.212 12:56:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:13:35.212 12:56:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:35.472 true 00:13:35.472 12:56:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:35.472 12:56:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.472 12:56:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.733 12:56:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:13:35.733 12:56:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:35.733 true 00:13:35.993 12:56:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:35.993 12:56:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.936 12:56:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.936 12:56:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:13:36.936 12:56:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:37.196 true 00:13:37.196 12:56:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:37.196 12:56:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.196 12:56:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.456 12:56:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:13:37.456 12:56:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:37.456 true 00:13:37.716 12:56:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:37.716 12:56:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.716 12:56:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.977 12:56:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:13:37.977 12:56:42 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:37.977 true 00:13:37.977 12:56:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:37.977 12:56:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.237 12:56:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.504 12:56:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:13:38.504 12:56:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:38.504 true 00:13:38.504 12:56:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:38.504 12:56:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.767 12:56:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.767 12:56:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:13:38.767 12:56:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:39.029 true 00:13:39.029 12:56:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:39.029 12:56:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.973 12:56:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.234 12:56:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:13:40.234 12:56:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:40.234 true 00:13:40.235 12:56:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:40.235 12:56:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.496 12:56:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.496 12:56:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:13:40.496 12:56:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:40.757 true 00:13:40.757 12:56:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:40.757 12:56:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.018 12:56:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.018 12:56:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:13:41.018 12:56:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:41.279 true 
00:13:41.279 12:56:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:41.279 12:56:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.279 12:56:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.541 12:56:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:13:41.541 12:56:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:41.802 true 00:13:41.802 12:56:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:41.802 12:56:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.802 12:56:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.064 12:56:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:13:42.064 12:56:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:42.064 true 00:13:42.064 12:56:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:42.064 12:56:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.325 12:56:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.587 12:56:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:13:42.587 12:56:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:42.587 true 00:13:42.587 12:56:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:42.587 12:56:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.848 12:56:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.848 12:56:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:13:42.848 12:56:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:43.109 true 00:13:43.109 12:56:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:43.109 12:56:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.052 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.052 12:56:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.052 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.313 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:13:44.313 12:56:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:13:44.313 12:56:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:44.574 true 00:13:44.574 12:56:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:44.574 12:56:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.515 12:56:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.515 12:56:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:13:45.515 12:56:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:45.774 true 00:13:45.774 12:56:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:45.774 12:56:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.774 12:56:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.034 12:56:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:13:46.034 12:56:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:46.294 true 00:13:46.294 12:56:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:46.294 12:56:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.294 12:56:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.554 12:56:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:13:46.554 12:56:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:46.554 true 00:13:46.815 12:56:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:46.815 12:56:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.815 12:56:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.076 12:56:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:13:47.076 12:56:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:47.076 true 00:13:47.336 12:56:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:47.336 12:56:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.336 12:56:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.596 12:56:52 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1030 00:13:47.596 12:56:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:47.596 true 00:13:47.596 12:56:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:47.596 12:56:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.856 12:56:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.116 12:56:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:13:48.116 12:56:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:48.116 true 00:13:48.116 12:56:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:48.116 12:56:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.382 12:56:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.382 12:56:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:13:48.382 12:56:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:48.642 true 00:13:48.642 12:56:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:48.642 12:56:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.902 12:56:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.902 12:56:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:13:48.902 12:56:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:49.162 true 00:13:49.162 12:56:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:49.162 12:56:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.422 12:56:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.422 12:56:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:13:49.422 12:56:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:49.681 true 00:13:49.681 12:56:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:49.681 12:56:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.679 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.679 12:56:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.679 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.679 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.679 12:56:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:13:50.679 12:56:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:50.938 true 00:13:50.939 12:56:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:50.939 12:56:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.878 12:56:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.878 12:56:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:13:51.878 12:56:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:52.138 true 00:13:52.138 12:56:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:52.138 12:56:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.398 12:56:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.398 12:56:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:13:52.398 12:56:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:52.658 true 00:13:52.658 12:56:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:52.658 12:56:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.658 12:56:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.918 12:56:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:13:52.918 12:56:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:13:53.178 true 00:13:53.178 12:56:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:53.178 12:56:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.178 12:56:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.439 12:56:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:13:53.439 12:56:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:13:53.700 true 00:13:53.700 12:56:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:53.700 12:56:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.700 12:56:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.961 12:56:58 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:13:53.961 12:56:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:13:54.222 true 00:13:54.222 12:56:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:54.222 12:56:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.222 12:56:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.483 12:56:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:13:54.483 12:56:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:13:54.483 true 00:13:54.483 12:56:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:54.483 12:56:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.744 12:56:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.005 12:56:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:13:55.005 12:56:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:13:55.005 true 00:13:55.005 12:57:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:55.005 12:57:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.948 12:57:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.208 12:57:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:13:56.208 12:57:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:13:56.208 true 00:13:56.208 12:57:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:56.208 12:57:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.469 12:57:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.731 12:57:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:13:56.731 12:57:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:13:56.731 true 00:13:56.731 12:57:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:56.731 12:57:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.991 12:57:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.252 12:57:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:13:57.252 12:57:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:13:57.252 true 00:13:57.252 12:57:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:57.252 12:57:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.512 12:57:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.512 12:57:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:13:57.512 12:57:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:13:57.772 true 00:13:57.772 12:57:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:57.772 12:57:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.032 12:57:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.032 12:57:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:13:58.032 12:57:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:13:58.293 true 00:13:58.293 12:57:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:58.293 12:57:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.552 12:57:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.552 Initializing NVMe Controllers 00:13:58.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.552 Controller IO queue size 128, less than required. 00:13:58.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.552 Controller IO queue size 128, less than required. 00:13:58.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:58.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:58.552 Initialization complete. Launching workers. 
00:13:58.552 ======================================================== 00:13:58.552 Latency(us) 00:13:58.552 Device Information : IOPS MiB/s Average min max 00:13:58.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 644.55 0.31 53293.86 2355.26 1227152.37 00:13:58.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7933.85 3.87 16134.49 1429.36 449050.78 00:13:58.552 ======================================================== 00:13:58.552 Total : 8578.40 4.19 18926.51 1429.36 1227152.37 00:13:58.552 00:13:58.552 12:57:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:13:58.552 12:57:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:13:58.813 true 00:13:58.813 12:57:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3888148 00:13:58.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3888148) - No such process 00:13:58.813 12:57:03 -- target/ns_hotplug_stress.sh@44 -- # wait 3888148 00:13:58.813 12:57:03 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:58.813 12:57:03 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:13:58.813 12:57:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:58.813 12:57:03 -- nvmf/common.sh@117 -- # sync 00:13:58.813 12:57:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.813 12:57:03 -- nvmf/common.sh@120 -- # set +e 00:13:58.813 12:57:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.813 12:57:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.813 rmmod nvme_tcp 00:13:58.813 rmmod nvme_fabrics 00:13:58.813 rmmod nvme_keyring 00:13:58.813 12:57:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.813 12:57:03 -- nvmf/common.sh@124 -- # set -e 00:13:58.813 12:57:03 -- nvmf/common.sh@125 -- # return 0 00:13:58.813 12:57:03 -- nvmf/common.sh@478 -- # '[' -n 3887538 ']' 00:13:58.813 12:57:03 -- nvmf/common.sh@479 -- # killprocess 3887538 00:13:58.813 12:57:03 -- common/autotest_common.sh@936 -- # '[' -z 3887538 ']' 00:13:58.813 12:57:03 -- common/autotest_common.sh@940 -- # kill -0 3887538 00:13:58.813 12:57:03 -- common/autotest_common.sh@941 -- # uname 00:13:58.813 12:57:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.813 12:57:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3887538 00:13:58.813 12:57:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:58.813 12:57:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:58.813 12:57:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3887538' 00:13:58.813 killing process with pid 3887538 00:13:58.813 12:57:03 -- common/autotest_common.sh@955 -- # kill 3887538 00:13:58.813 12:57:03 -- common/autotest_common.sh@960 -- # wait 3887538 00:13:59.073 12:57:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:59.073 12:57:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:59.073 12:57:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:59.073 12:57:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.073 12:57:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.073 12:57:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.073 12:57:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.073 12:57:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.984 
12:57:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.984 00:14:00.984 real 0m42.615s 00:14:00.984 user 2m31.348s 00:14:00.984 sys 0m10.774s 00:14:00.984 12:57:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:00.984 12:57:06 -- common/autotest_common.sh@10 -- # set +x 00:14:00.984 ************************************ 00:14:00.984 END TEST nvmf_ns_hotplug_stress 00:14:00.984 ************************************ 00:14:01.245 12:57:06 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:01.245 12:57:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:01.245 12:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:01.245 12:57:06 -- common/autotest_common.sh@10 -- # set +x 00:14:01.245 ************************************ 00:14:01.245 START TEST nvmf_connect_stress 00:14:01.245 ************************************ 00:14:01.245 12:57:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:01.245 * Looking for test storage... 00:14:01.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.245 12:57:06 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.245 12:57:06 -- nvmf/common.sh@7 -- # uname -s 00:14:01.245 12:57:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.245 12:57:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.245 12:57:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.245 12:57:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.245 12:57:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.245 12:57:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.245 12:57:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.245 12:57:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.245 12:57:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.245 12:57:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.245 12:57:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.245 12:57:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.245 12:57:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.245 12:57:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.245 12:57:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.245 12:57:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.245 12:57:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.245 12:57:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.245 12:57:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.245 12:57:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.245 12:57:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.246 12:57:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.507 12:57:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.507 12:57:06 -- paths/export.sh@5 -- # export PATH 00:14:01.507 12:57:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.507 12:57:06 -- nvmf/common.sh@47 -- # : 0 00:14:01.507 12:57:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.507 12:57:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.507 12:57:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.507 12:57:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.507 12:57:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.507 12:57:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.507 12:57:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.507 12:57:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.507 12:57:06 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:01.507 12:57:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:01.507 12:57:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.507 12:57:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:01.507 12:57:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:01.507 12:57:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:01.507 12:57:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.507 12:57:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.507 12:57:06 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.507 12:57:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:01.507 12:57:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:01.507 12:57:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.507 12:57:06 -- common/autotest_common.sh@10 -- # set +x 00:14:09.650 12:57:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:09.650 12:57:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.650 12:57:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.650 12:57:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.650 12:57:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.650 12:57:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.650 12:57:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.650 12:57:13 -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.650 12:57:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.650 12:57:13 -- nvmf/common.sh@296 -- # e810=() 00:14:09.650 12:57:13 -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.650 12:57:13 -- nvmf/common.sh@297 -- # x722=() 00:14:09.650 12:57:13 -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.650 12:57:13 -- nvmf/common.sh@298 -- # mlx=() 00:14:09.650 12:57:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.650 12:57:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.650 12:57:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.650 12:57:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.650 12:57:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.650 12:57:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.650 12:57:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:09.650 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:09.650 12:57:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.650 12:57:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:09.650 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:09.650 
12:57:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.650 12:57:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.650 12:57:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.650 12:57:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:09.650 12:57:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.650 12:57:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:09.650 Found net devices under 0000:31:00.0: cvl_0_0 00:14:09.650 12:57:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.650 12:57:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.650 12:57:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.650 12:57:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:09.650 12:57:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.650 12:57:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:09.650 Found net devices under 0000:31:00.1: cvl_0_1 00:14:09.650 12:57:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.650 12:57:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:09.650 12:57:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:09.650 12:57:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:09.650 12:57:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.650 12:57:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.650 12:57:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.650 12:57:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.650 12:57:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.650 12:57:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.650 12:57:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.650 12:57:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.650 12:57:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.650 12:57:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.650 12:57:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.650 12:57:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.650 12:57:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.650 12:57:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.650 12:57:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.650 12:57:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.650 12:57:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.650 12:57:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.650 12:57:13 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.650 12:57:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:14:09.650 00:14:09.650 --- 10.0.0.2 ping statistics --- 00:14:09.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.650 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:14:09.650 12:57:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:14:09.650 00:14:09.650 --- 10.0.0.1 ping statistics --- 00:14:09.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.650 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:14:09.650 12:57:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.650 12:57:13 -- nvmf/common.sh@411 -- # return 0 00:14:09.650 12:57:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:09.650 12:57:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.650 12:57:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:09.650 12:57:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.651 12:57:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:09.651 12:57:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:09.651 12:57:13 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:09.651 12:57:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:09.651 12:57:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:09.651 12:57:13 -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 12:57:13 -- nvmf/common.sh@470 -- # nvmfpid=3898957 00:14:09.651 12:57:13 -- nvmf/common.sh@471 -- # waitforlisten 3898957 00:14:09.651 12:57:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:09.651 12:57:13 -- common/autotest_common.sh@817 -- # '[' -z 3898957 ']' 00:14:09.651 12:57:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.651 12:57:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.651 12:57:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.651 12:57:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.651 12:57:13 -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 [2024-04-26 12:57:13.735943] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:09.651 [2024-04-26 12:57:13.736003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.651 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.651 [2024-04-26 12:57:13.825972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.651 [2024-04-26 12:57:13.920539] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:09.651 [2024-04-26 12:57:13.920603] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.651 [2024-04-26 12:57:13.920611] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.651 [2024-04-26 12:57:13.920618] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.651 [2024-04-26 12:57:13.920625] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.651 [2024-04-26 12:57:13.920760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.651 [2024-04-26 12:57:13.920911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.651 [2024-04-26 12:57:13.920943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.651 12:57:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.651 12:57:14 -- common/autotest_common.sh@850 -- # return 0 00:14:09.651 12:57:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:09.651 12:57:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:09.651 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 12:57:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.651 12:57:14 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.651 12:57:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.651 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 [2024-04-26 12:57:14.574201] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.651 12:57:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.651 12:57:14 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:09.651 12:57:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.651 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 12:57:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.651 12:57:14 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.651 12:57:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.651 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 [2024-04-26 12:57:14.598564] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.651 12:57:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.651 12:57:14 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:09.651 12:57:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.651 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 NULL1 00:14:09.651 12:57:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.651 12:57:14 -- target/connect_stress.sh@21 -- # PERF_PID=3899291 00:14:09.651 12:57:14 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:09.651 12:57:14 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:09.651 12:57:14 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.651 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.651 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.912 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.912 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.912 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.912 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.912 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.912 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.912 12:57:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.912 12:57:14 -- target/connect_stress.sh@28 -- # cat 00:14:09.912 12:57:14 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:09.913 12:57:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.913 12:57:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.913 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:10.173 12:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.173 12:57:15 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:10.173 12:57:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.173 12:57:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.173 12:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.433 12:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.433 12:57:15 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:10.433 12:57:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.433 12:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.433 12:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.693 12:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.693 12:57:15 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:10.693 12:57:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.693 12:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.693 12:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:11.264 12:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.264 12:57:16 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:11.264 12:57:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.264 12:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.265 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.525 12:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.525 12:57:16 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:11.525 12:57:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.525 12:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.525 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.785 12:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.785 12:57:16 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:11.785 12:57:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.785 12:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.785 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:14:12.045 12:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.045 12:57:17 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:12.045 12:57:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.045 12:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.045 12:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.306 12:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.306 12:57:17 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:12.306 12:57:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.306 12:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.306 12:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.878 12:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.878 12:57:17 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:12.878 12:57:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.878 12:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.878 12:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:13.139 12:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.139 12:57:17 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:13.139 12:57:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.139 12:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.139 12:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:13.399 12:57:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.399 12:57:18 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:13.399 12:57:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.399 12:57:18 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.399 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.660 12:57:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.660 12:57:18 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:13.661 12:57:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.661 12:57:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.661 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.922 12:57:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.922 12:57:18 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:13.922 12:57:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.922 12:57:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.922 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.494 12:57:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.494 12:57:19 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:14.494 12:57:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.494 12:57:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.494 12:57:19 -- common/autotest_common.sh@10 -- # set +x 00:14:14.754 12:57:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.754 12:57:19 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:14.754 12:57:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.754 12:57:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.754 12:57:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.015 12:57:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.015 12:57:19 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:15.015 12:57:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.015 12:57:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.015 12:57:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.276 12:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.276 12:57:20 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:15.276 12:57:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.276 12:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.276 12:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:15.536 12:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.536 12:57:20 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:15.536 12:57:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.536 12:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.536 12:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:16.107 12:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.107 12:57:20 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:16.107 12:57:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.107 12:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.107 12:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:16.368 12:57:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.368 12:57:21 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:16.368 12:57:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.368 12:57:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.368 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:14:16.629 12:57:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.629 12:57:21 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:16.629 12:57:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.629 12:57:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.629 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:14:16.889 12:57:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.889 12:57:21 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:16.889 12:57:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.889 12:57:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.889 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.149 12:57:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.409 12:57:22 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:17.409 12:57:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.409 12:57:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.409 12:57:22 -- common/autotest_common.sh@10 -- # set +x 00:14:17.670 12:57:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.670 12:57:22 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:17.670 12:57:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.670 12:57:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.670 12:57:22 -- common/autotest_common.sh@10 -- # set +x 00:14:17.931 12:57:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.931 12:57:22 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:17.931 12:57:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.931 12:57:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.931 12:57:22 -- common/autotest_common.sh@10 -- # set +x 00:14:18.191 12:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.191 12:57:23 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:18.191 12:57:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.191 12:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.191 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:18.451 12:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.451 12:57:23 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:18.451 12:57:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.451 12:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.451 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.022 12:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.022 12:57:23 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:19.022 12:57:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.022 12:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.022 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.282 12:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.282 12:57:24 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:19.282 12:57:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.282 12:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.282 12:57:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.542 12:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.542 12:57:24 -- target/connect_stress.sh@34 -- # kill -0 3899291 00:14:19.542 12:57:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.542 12:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.542 12:57:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.804 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:19.804 12:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.804 12:57:24 -- target/connect_stress.sh@34 -- # kill -0 3899291 
00:14:19.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3899291) - No such process 00:14:19.804 12:57:24 -- target/connect_stress.sh@38 -- # wait 3899291 00:14:19.804 12:57:24 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:19.804 12:57:24 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:19.804 12:57:24 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:19.804 12:57:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:19.804 12:57:24 -- nvmf/common.sh@117 -- # sync 00:14:19.804 12:57:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.804 12:57:24 -- nvmf/common.sh@120 -- # set +e 00:14:19.804 12:57:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.804 12:57:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.804 rmmod nvme_tcp 00:14:19.804 rmmod nvme_fabrics 00:14:19.804 rmmod nvme_keyring 00:14:20.065 12:57:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.065 12:57:24 -- nvmf/common.sh@124 -- # set -e 00:14:20.065 12:57:24 -- nvmf/common.sh@125 -- # return 0 00:14:20.065 12:57:24 -- nvmf/common.sh@478 -- # '[' -n 3898957 ']' 00:14:20.065 12:57:24 -- nvmf/common.sh@479 -- # killprocess 3898957 00:14:20.065 12:57:24 -- common/autotest_common.sh@936 -- # '[' -z 3898957 ']' 00:14:20.065 12:57:24 -- common/autotest_common.sh@940 -- # kill -0 3898957 00:14:20.065 12:57:24 -- common/autotest_common.sh@941 -- # uname 00:14:20.065 12:57:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.065 12:57:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3898957 00:14:20.065 12:57:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:20.065 12:57:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:20.065 12:57:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3898957' 00:14:20.065 killing process with pid 3898957 00:14:20.065 12:57:24 -- common/autotest_common.sh@955 -- # kill 3898957 00:14:20.065 12:57:24 -- common/autotest_common.sh@960 -- # wait 3898957 00:14:20.065 12:57:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:20.065 12:57:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:20.065 12:57:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:20.065 12:57:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.065 12:57:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.065 12:57:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.065 12:57:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.065 12:57:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.611 12:57:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.611 00:14:22.611 real 0m20.939s 00:14:22.611 user 0m42.221s 00:14:22.611 sys 0m8.653s 00:14:22.611 12:57:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:22.611 12:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 ************************************ 00:14:22.611 END TEST nvmf_connect_stress 00:14:22.611 ************************************ 00:14:22.611 12:57:27 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:22.611 12:57:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:22.611 12:57:27 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:14:22.611 12:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 ************************************ 00:14:22.611 START TEST nvmf_fused_ordering 00:14:22.611 ************************************ 00:14:22.611 12:57:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:22.611 * Looking for test storage... 00:14:22.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.611 12:57:27 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.611 12:57:27 -- nvmf/common.sh@7 -- # uname -s 00:14:22.611 12:57:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.611 12:57:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.611 12:57:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.611 12:57:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.611 12:57:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.611 12:57:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.611 12:57:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.611 12:57:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.611 12:57:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.611 12:57:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.611 12:57:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:22.611 12:57:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:22.611 12:57:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.611 12:57:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.611 12:57:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.611 12:57:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.611 12:57:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.612 12:57:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.612 12:57:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.612 12:57:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.612 12:57:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.612 12:57:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.612 12:57:27 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.612 12:57:27 -- paths/export.sh@5 -- # export PATH 00:14:22.612 12:57:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.612 12:57:27 -- nvmf/common.sh@47 -- # : 0 00:14:22.612 12:57:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.612 12:57:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.612 12:57:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.612 12:57:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.612 12:57:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.612 12:57:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.612 12:57:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.612 12:57:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.612 12:57:27 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:22.612 12:57:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:22.612 12:57:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.612 12:57:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:22.612 12:57:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:22.612 12:57:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:22.612 12:57:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.612 12:57:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.612 12:57:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.612 12:57:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:22.612 12:57:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:22.612 12:57:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.612 12:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.305 12:57:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:29.305 12:57:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.305 12:57:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.305 12:57:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.305 12:57:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.305 12:57:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.305 12:57:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.305 12:57:33 -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.305 12:57:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.305 12:57:33 -- nvmf/common.sh@296 -- # e810=() 00:14:29.305 12:57:33 -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.305 12:57:33 -- nvmf/common.sh@297 -- # 
x722=() 00:14:29.305 12:57:33 -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.305 12:57:33 -- nvmf/common.sh@298 -- # mlx=() 00:14:29.305 12:57:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.305 12:57:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.305 12:57:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.305 12:57:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.305 12:57:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.305 12:57:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.305 12:57:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:29.305 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:29.305 12:57:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.305 12:57:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:29.305 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:29.305 12:57:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.305 12:57:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.305 12:57:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.305 12:57:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:29.305 12:57:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.305 12:57:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:29.305 Found net devices under 0000:31:00.0: cvl_0_0 00:14:29.305 12:57:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:29.305 12:57:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.305 12:57:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.305 12:57:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:29.305 12:57:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.305 12:57:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:29.305 Found net devices under 0000:31:00.1: cvl_0_1 00:14:29.305 12:57:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.305 12:57:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:29.305 12:57:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:29.305 12:57:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:29.305 12:57:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:29.305 12:57:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.305 12:57:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.305 12:57:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.305 12:57:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.305 12:57:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.305 12:57:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.305 12:57:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.305 12:57:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.305 12:57:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.305 12:57:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.305 12:57:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.305 12:57:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.305 12:57:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.305 12:57:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.305 12:57:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.305 12:57:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.305 12:57:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.305 12:57:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.305 12:57:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.305 12:57:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:14:29.305 00:14:29.305 --- 10.0.0.2 ping statistics --- 00:14:29.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.305 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:14:29.305 12:57:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:14:29.305 00:14:29.305 --- 10.0.0.1 ping statistics --- 00:14:29.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.305 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:29.306 12:57:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.306 12:57:34 -- nvmf/common.sh@411 -- # return 0 00:14:29.306 12:57:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:29.306 12:57:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.306 12:57:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:29.306 12:57:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:29.306 12:57:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.306 12:57:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:29.306 12:57:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:29.306 12:57:34 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:29.306 12:57:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:29.306 12:57:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:29.306 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:29.306 12:57:34 -- nvmf/common.sh@470 -- # nvmfpid=3905388 00:14:29.306 12:57:34 -- nvmf/common.sh@471 -- # waitforlisten 3905388 00:14:29.306 12:57:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.306 12:57:34 -- common/autotest_common.sh@817 -- # '[' -z 3905388 ']' 00:14:29.306 12:57:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.306 12:57:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:29.306 12:57:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.306 12:57:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.306 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:29.306 [2024-04-26 12:57:34.211600] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:29.306 [2024-04-26 12:57:34.211675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.306 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.306 [2024-04-26 12:57:34.294235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.306 [2024-04-26 12:57:34.364931] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.306 [2024-04-26 12:57:34.364978] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.306 [2024-04-26 12:57:34.364987] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.306 [2024-04-26 12:57:34.364993] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.306 [2024-04-26 12:57:34.364999] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
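For reference, the nvmf_tcp_init plumbing traced just above (test/nvmf/common.sh) reduces to roughly the commands below — a minimal standalone sketch assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses shown in this log, run as root on the test host:

  # flush any stale addresses, then move the target-side port into its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends; the initiator-side port stays in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the default NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1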
00:14:29.306 [2024-04-26 12:57:34.365023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.251 12:57:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:30.251 12:57:34 -- common/autotest_common.sh@850 -- # return 0 00:14:30.251 12:57:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:30.251 12:57:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:30.251 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 12:57:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.251 12:57:35 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.251 12:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.251 12:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 [2024-04-26 12:57:35.025887] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.251 12:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.251 12:57:35 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:30.251 12:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.251 12:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 12:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.251 12:57:35 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.251 12:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.251 12:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 [2024-04-26 12:57:35.050146] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.251 12:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.251 12:57:35 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:30.251 12:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.251 12:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 NULL1 00:14:30.251 12:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.251 12:57:35 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:30.251 12:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.251 12:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 12:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.251 12:57:35 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:30.251 12:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.251 12:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.251 12:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.251 12:57:35 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:30.251 [2024-04-26 12:57:35.119234] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:14:30.252 [2024-04-26 12:57:35.119291] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905685 ]
00:14:30.252 EAL: No free 2048 kB hugepages reported on node 1
00:14:30.824 Attached to nqn.2016-06.io.spdk:cnode1
00:14:30.824 Namespace ID: 1 size: 1GB
00:14:30.824 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(1022) elided: all 1024 fused_ordering iterations were logged between 00:14:30.824 and 00:14:32.496 ...]
00:14:32.496 fused_ordering(1023)
00:14:32.496 12:57:37 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:32.496 12:57:37 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:32.496 12:57:37 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:32.497 12:57:37 -- nvmf/common.sh@117 -- # sync
00:14:32.497 12:57:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:32.497 12:57:37 -- nvmf/common.sh@120 -- # set +e
00:14:32.497 12:57:37 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:32.497 12:57:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:32.497 rmmod nvme_tcp
00:14:32.497 rmmod nvme_fabrics
00:14:32.497 rmmod nvme_keyring
00:14:32.497 12:57:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:32.497 12:57:37 -- nvmf/common.sh@124 -- # set -e
00:14:32.497 12:57:37 -- nvmf/common.sh@125 -- # return 0
00:14:32.497 12:57:37 -- nvmf/common.sh@478 -- # '[' -n 3905388 ']'
00:14:32.497 12:57:37 -- nvmf/common.sh@479 -- # killprocess 3905388
00:14:32.497 12:57:37 -- common/autotest_common.sh@936 -- # '[' -z 3905388 ']'
00:14:32.497 12:57:37 -- common/autotest_common.sh@940 -- # kill -0 3905388
00:14:32.497 12:57:37 -- common/autotest_common.sh@941 -- # uname
00:14:32.497 12:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:32.497 12:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers
-o comm= 3905388 00:14:32.757 12:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:32.757 12:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:32.757 12:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3905388' 00:14:32.757 killing process with pid 3905388 00:14:32.757 12:57:37 -- common/autotest_common.sh@955 -- # kill 3905388 00:14:32.757 12:57:37 -- common/autotest_common.sh@960 -- # wait 3905388 00:14:32.757 12:57:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:32.757 12:57:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:32.757 12:57:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:32.757 12:57:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.757 12:57:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.757 12:57:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.757 12:57:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.757 12:57:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.306 12:57:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.306 00:14:35.306 real 0m12.522s 00:14:35.306 user 0m6.788s 00:14:35.306 sys 0m6.437s 00:14:35.306 12:57:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.306 12:57:39 -- common/autotest_common.sh@10 -- # set +x 00:14:35.306 ************************************ 00:14:35.306 END TEST nvmf_fused_ordering 00:14:35.306 ************************************ 00:14:35.306 12:57:39 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:35.306 12:57:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:35.306 12:57:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.306 12:57:39 -- common/autotest_common.sh@10 -- # set +x 00:14:35.306 ************************************ 00:14:35.306 START TEST nvmf_delete_subsystem 00:14:35.306 ************************************ 00:14:35.306 12:57:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:35.306 * Looking for test storage... 
00:14:35.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.306 12:57:40 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.306 12:57:40 -- nvmf/common.sh@7 -- # uname -s 00:14:35.306 12:57:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.306 12:57:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.306 12:57:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.306 12:57:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.306 12:57:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.306 12:57:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.306 12:57:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.306 12:57:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.306 12:57:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.306 12:57:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.306 12:57:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.306 12:57:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.306 12:57:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.306 12:57:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.306 12:57:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.306 12:57:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.306 12:57:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.306 12:57:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.306 12:57:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.306 12:57:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.306 12:57:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.306 12:57:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.306 12:57:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.306 12:57:40 -- paths/export.sh@5 -- # export PATH 00:14:35.306 12:57:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.306 12:57:40 -- nvmf/common.sh@47 -- # : 0 00:14:35.306 12:57:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.306 12:57:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.306 12:57:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.306 12:57:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.306 12:57:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.306 12:57:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.306 12:57:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.306 12:57:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.306 12:57:40 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:35.306 12:57:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:35.306 12:57:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.306 12:57:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:35.306 12:57:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:35.306 12:57:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:35.306 12:57:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.306 12:57:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.306 12:57:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.306 12:57:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:35.306 12:57:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:35.306 12:57:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.306 12:57:40 -- common/autotest_common.sh@10 -- # set +x 00:14:41.898 12:57:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:41.898 12:57:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.898 12:57:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.898 12:57:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.898 12:57:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.898 12:57:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.898 12:57:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.898 12:57:46 -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.898 12:57:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.898 12:57:46 -- nvmf/common.sh@296 -- # e810=() 00:14:41.898 12:57:46 -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.898 12:57:46 -- nvmf/common.sh@297 -- # x722=() 
00:14:41.898 12:57:46 -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.898 12:57:46 -- nvmf/common.sh@298 -- # mlx=() 00:14:41.898 12:57:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.898 12:57:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.898 12:57:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.899 12:57:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.899 12:57:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.899 12:57:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.899 12:57:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.899 12:57:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:41.899 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:41.899 12:57:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.899 12:57:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:41.899 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:41.899 12:57:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.899 12:57:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.899 12:57:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.899 12:57:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.899 12:57:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:41.899 12:57:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.899 12:57:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:41.899 Found net devices under 0000:31:00.0: cvl_0_0 00:14:41.899 12:57:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
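For reference, the probe being traced here checks each matching PCI function for a bound kernel net device, and the same check is repeated for the second port just below. A rough standalone equivalent, assuming Intel E810-family NICs (vendor:device 8086:159b, as matched above); the lspci invocation is an illustration only and is not part of nvmf/common.sh:

# Sketch only: list E810 ports by PCI ID and report the netdev bound to each
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  for netdev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdev" ] || continue            # skip functions with no bound netdev
    echo "Found net devices under $pci: $(basename $netdev)"
  done
done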
00:14:41.899 12:57:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.899 12:57:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.899 12:57:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:41.899 12:57:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.899 12:57:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:41.899 Found net devices under 0000:31:00.1: cvl_0_1 00:14:41.899 12:57:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.899 12:57:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:42.160 12:57:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:42.160 12:57:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:42.160 12:57:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:42.160 12:57:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:42.160 12:57:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.160 12:57:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.160 12:57:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.160 12:57:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.160 12:57:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.160 12:57:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.160 12:57:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.160 12:57:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.160 12:57:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.160 12:57:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:42.160 12:57:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.160 12:57:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.161 12:57:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.161 12:57:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.161 12:57:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.161 12:57:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.161 12:57:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.422 12:57:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.422 12:57:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.422 12:57:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:14:42.422 00:14:42.422 --- 10.0.0.2 ping statistics --- 00:14:42.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.422 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:14:42.422 12:57:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:42.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:14:42.422 00:14:42.422 --- 10.0.0.1 ping statistics --- 00:14:42.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.422 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:14:42.422 12:57:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.422 12:57:47 -- nvmf/common.sh@411 -- # return 0 00:14:42.422 12:57:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:42.422 12:57:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.422 12:57:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:42.422 12:57:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:42.422 12:57:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.422 12:57:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:42.422 12:57:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:42.422 12:57:47 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:42.422 12:57:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:42.422 12:57:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:42.422 12:57:47 -- common/autotest_common.sh@10 -- # set +x 00:14:42.422 12:57:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:42.422 12:57:47 -- nvmf/common.sh@470 -- # nvmfpid=3910403 00:14:42.422 12:57:47 -- nvmf/common.sh@471 -- # waitforlisten 3910403 00:14:42.422 12:57:47 -- common/autotest_common.sh@817 -- # '[' -z 3910403 ']' 00:14:42.422 12:57:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.422 12:57:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.422 12:57:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.422 12:57:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.422 12:57:47 -- common/autotest_common.sh@10 -- # set +x 00:14:42.422 [2024-04-26 12:57:47.360174] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:42.422 [2024-04-26 12:57:47.360233] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.422 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.422 [2024-04-26 12:57:47.429031] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:42.683 [2024-04-26 12:57:47.493338] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.683 [2024-04-26 12:57:47.493376] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.683 [2024-04-26 12:57:47.493384] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.683 [2024-04-26 12:57:47.493390] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.683 [2024-04-26 12:57:47.493396] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
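The network split used above can be reproduced by hand from the commands nvmf/common.sh traces: the target port is moved into its own network namespace, each side gets a 10.0.0.0/24 address, NVMe/TCP port 4420 is allowed in, and reachability is checked in both directions. A minimal sketch, run as root, using only the interface names and addresses shown in this log; anything beyond those is an assumption rather than part of the test scripts:

# Sketch only: mirrors the ip/iptables/ping sequence logged above
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                 # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

With that split in place, the nvmf_tgt started in the following log entries listens on 10.0.0.2:4420 inside the namespace while the perf initiator connects from the host side.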
00:14:42.683 [2024-04-26 12:57:47.493457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.683 [2024-04-26 12:57:47.493459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.256 12:57:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.256 12:57:48 -- common/autotest_common.sh@850 -- # return 0 00:14:43.256 12:57:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.256 12:57:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.256 12:57:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.256 12:57:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.256 12:57:48 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.256 12:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.256 12:57:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.256 [2024-04-26 12:57:48.196650] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.256 12:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.256 12:57:48 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.256 12:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.256 12:57:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.256 12:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.256 12:57:48 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.256 12:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.257 12:57:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.257 [2024-04-26 12:57:48.220810] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.257 12:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.257 12:57:48 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:43.257 12:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.257 12:57:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.257 NULL1 00:14:43.257 12:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.257 12:57:48 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:43.257 12:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.257 12:57:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.257 Delay0 00:14:43.257 12:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.257 12:57:48 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.257 12:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.257 12:57:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.257 12:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.257 12:57:48 -- target/delete_subsystem.sh@28 -- # perf_pid=3910499 00:14:43.257 12:57:48 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:43.257 12:57:48 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:43.257 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.517 [2024-04-26 12:57:48.317452] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:45.431 12:57:50 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:45.431 12:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:45.431 12:57:50 -- common/autotest_common.sh@10 -- # set +x
[... repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions from the outstanding spdk_nvme_perf I/O elided (00:14:45.431 through 00:14:46.375); the unique nvme_tcp errors from that interval follow ...]
[2024-04-26 12:57:50.360876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0b40 is same with the state(5) to be set
[2024-04-26 12:57:50.361939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0780 is same with the state(5) to be set
[2024-04-26 12:57:50.366119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f37c4000c00 is same with the state(5) to be set
[2024-04-26 12:57:51.334002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec6c40 is same with the state(5) to be set
[2024-04-26 12:57:51.364646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0cd0 is same with the state(5) to be set
00:14:46.375 Read completed with error (sct=0,
sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 [2024-04-26 12:57:51.364731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0910 is same with the state(5) to be set 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 [2024-04-26 12:57:51.368503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f37c400bf90 is same with the state(5) to be set 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Write completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 Read completed with error (sct=0, sc=8) 00:14:46.375 [2024-04-26 12:57:51.368574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f37c400c690 is same with the state(5) to be set 00:14:46.375 [2024-04-26 12:57:51.369089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec6c40 (9): Bad file descriptor 00:14:46.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:46.375 12:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.375 12:57:51 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:46.375 Initializing NVMe Controllers 00:14:46.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.375 Controller IO queue size 128, less than required. 00:14:46.375 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:46.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:46.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:46.375 Initialization complete. Launching workers. 00:14:46.375 ======================================================== 00:14:46.375 Latency(us) 00:14:46.375 Device Information : IOPS MiB/s Average min max 00:14:46.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.37 0.08 904751.23 351.21 1045491.63 00:14:46.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.90 0.08 917520.13 274.63 1010535.98 00:14:46.375 ======================================================== 00:14:46.375 Total : 325.27 0.16 911028.13 274.63 1045491.63 00:14:46.375 00:14:46.375 12:57:51 -- target/delete_subsystem.sh@35 -- # kill -0 3910499 00:14:46.375 12:57:51 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:46.945 12:57:51 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:46.945 12:57:51 -- target/delete_subsystem.sh@35 -- # kill -0 3910499 00:14:46.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3910499) - No such process 00:14:46.945 12:57:51 -- target/delete_subsystem.sh@45 -- # NOT wait 3910499 00:14:46.945 12:57:51 -- common/autotest_common.sh@638 -- # local es=0 00:14:46.945 12:57:51 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3910499 00:14:46.945 12:57:51 -- common/autotest_common.sh@626 -- # local arg=wait 00:14:46.945 12:57:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:46.945 12:57:51 -- common/autotest_common.sh@630 -- # type -t wait 00:14:46.945 12:57:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:46.945 12:57:51 -- common/autotest_common.sh@641 -- # wait 3910499 00:14:46.945 12:57:51 -- common/autotest_common.sh@641 -- # es=1 00:14:46.945 12:57:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:46.945 12:57:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:46.945 12:57:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:46.945 12:57:51 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.945 12:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.945 12:57:51 -- common/autotest_common.sh@10 -- # set +x 00:14:46.945 12:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.945 12:57:51 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.945 12:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.945 12:57:51 -- common/autotest_common.sh@10 -- # set +x 00:14:46.945 [2024-04-26 12:57:51.901227] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.946 12:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.946 12:57:51 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.946 12:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.946 12:57:51 -- common/autotest_common.sh@10 -- # set +x 00:14:46.946 12:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.946 12:57:51 -- target/delete_subsystem.sh@54 -- # perf_pid=3911189 00:14:46.946 12:57:51 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:46.946 12:57:51 -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:46.946 12:57:51 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:46.946 12:57:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.946 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.946 [2024-04-26 12:57:51.967983] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:47.518 12:57:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:47.518 12:57:52 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:47.518 12:57:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:48.088 12:57:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.088 12:57:52 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:48.089 12:57:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:48.660 12:57:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.660 12:57:53 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:48.660 12:57:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:48.920 12:57:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.920 12:57:53 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:48.920 12:57:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:49.491 12:57:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:49.491 12:57:54 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:49.491 12:57:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.062 12:57:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.062 12:57:54 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:50.062 12:57:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.322 Initializing NVMe Controllers 00:14:50.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.322 Controller IO queue size 128, less than required. 00:14:50.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:50.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:50.322 Initialization complete. Launching workers. 
00:14:50.322 ======================================================== 00:14:50.322 Latency(us) 00:14:50.322 Device Information : IOPS MiB/s Average min max 00:14:50.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002946.25 1000142.78 1042504.81 00:14:50.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003536.90 1000159.04 1042376.04 00:14:50.322 ======================================================== 00:14:50.322 Total : 256.00 0.12 1003241.57 1000142.78 1042504.81 00:14:50.322 00:14:50.582 12:57:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.582 12:57:55 -- target/delete_subsystem.sh@57 -- # kill -0 3911189 00:14:50.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3911189) - No such process 00:14:50.582 12:57:55 -- target/delete_subsystem.sh@67 -- # wait 3911189 00:14:50.582 12:57:55 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:50.582 12:57:55 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:50.582 12:57:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:50.582 12:57:55 -- nvmf/common.sh@117 -- # sync 00:14:50.582 12:57:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.582 12:57:55 -- nvmf/common.sh@120 -- # set +e 00:14:50.582 12:57:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.582 12:57:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.582 rmmod nvme_tcp 00:14:50.582 rmmod nvme_fabrics 00:14:50.582 rmmod nvme_keyring 00:14:50.582 12:57:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.582 12:57:55 -- nvmf/common.sh@124 -- # set -e 00:14:50.582 12:57:55 -- nvmf/common.sh@125 -- # return 0 00:14:50.582 12:57:55 -- nvmf/common.sh@478 -- # '[' -n 3910403 ']' 00:14:50.582 12:57:55 -- nvmf/common.sh@479 -- # killprocess 3910403 00:14:50.582 12:57:55 -- common/autotest_common.sh@936 -- # '[' -z 3910403 ']' 00:14:50.582 12:57:55 -- common/autotest_common.sh@940 -- # kill -0 3910403 00:14:50.582 12:57:55 -- common/autotest_common.sh@941 -- # uname 00:14:50.582 12:57:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.582 12:57:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3910403 00:14:50.582 12:57:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.582 12:57:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.582 12:57:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3910403' 00:14:50.582 killing process with pid 3910403 00:14:50.582 12:57:55 -- common/autotest_common.sh@955 -- # kill 3910403 00:14:50.582 12:57:55 -- common/autotest_common.sh@960 -- # wait 3910403 00:14:50.844 12:57:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:50.844 12:57:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:50.844 12:57:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:50.844 12:57:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.844 12:57:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.844 12:57:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.844 12:57:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.844 12:57:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.757 12:57:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.757 00:14:52.757 real 0m17.868s 00:14:52.757 user 0m30.549s 00:14:52.757 sys 0m6.285s 
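The bursts of "Read/Write completed with error (sct=0, sc=8)" above are the point of this test: delete_subsystem.sh lets spdk_nvme_perf fill 128-deep queues against nqn.2016-06.io.spdk:cnode1 and then issues nvmf_delete_subsystem underneath it, so in-flight commands come back aborted (generic status code 8 is "Command Aborted due to SQ Deletion" in the NVMe base specification) and perf exits with "errors occurred"; the script then polls the perf PID with kill -0 until it is gone, and the second pass (pid 3911189) repeats the cycle against the re-created subsystem. A minimal sketch of that control flow, paraphrased from the rpc.py and spdk_nvme_perf invocations in the trace (paths shortened, loop bounds approximate, not the literal delete_subsystem.sh source):

    rpc=./scripts/rpc.py
    perf=./build/bin/spdk_nvme_perf

    # Target side: subsystem with a delay bdev namespace and a TCP listener (as in the trace).
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Host side: 128-deep mixed random read/write load in the background
    # (arguments taken from the perf invocation logged above).
    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Delete the subsystem while I/O is in flight; queued commands are expected
    # to complete with sct=0/sc=8 and the host-side TCP qpairs tear down.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Poll until perf notices the failure and exits (kill -0 only checks that the PID exists).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null && (( delay++ <= 30 )); do
        sleep 0.5
    done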
00:14:52.757 12:57:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.757 12:57:57 -- common/autotest_common.sh@10 -- # set +x 00:14:52.757 ************************************ 00:14:52.757 END TEST nvmf_delete_subsystem 00:14:52.757 ************************************ 00:14:53.018 12:57:57 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:53.018 12:57:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:53.018 12:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.018 12:57:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.018 ************************************ 00:14:53.018 START TEST nvmf_ns_masking 00:14:53.018 ************************************ 00:14:53.018 12:57:57 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:53.018 * Looking for test storage... 00:14:53.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.279 12:57:58 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.279 12:57:58 -- nvmf/common.sh@7 -- # uname -s 00:14:53.279 12:57:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.279 12:57:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.279 12:57:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.279 12:57:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.279 12:57:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.279 12:57:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.279 12:57:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.279 12:57:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.279 12:57:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.279 12:57:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.279 12:57:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:53.279 12:57:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:53.279 12:57:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.279 12:57:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.279 12:57:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.279 12:57:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.279 12:57:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.279 12:57:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.279 12:57:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.279 12:57:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.279 12:57:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.279 12:57:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.279 12:57:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.279 12:57:58 -- paths/export.sh@5 -- # export PATH 00:14:53.279 12:57:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.279 12:57:58 -- nvmf/common.sh@47 -- # : 0 00:14:53.279 12:57:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.279 12:57:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.279 12:57:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.279 12:57:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.279 12:57:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.279 12:57:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.279 12:57:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.279 12:57:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.279 12:57:58 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.279 12:57:58 -- target/ns_masking.sh@11 -- # loops=5 00:14:53.279 12:57:58 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:53.279 12:57:58 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:53.279 12:57:58 -- target/ns_masking.sh@15 -- # uuidgen 00:14:53.279 12:57:58 -- target/ns_masking.sh@15 -- # HOSTID=a8799b1e-3ac1-484a-a5ea-ba2deab047c1 00:14:53.279 12:57:58 -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:53.279 12:57:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:53.279 12:57:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.279 12:57:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:53.279 12:57:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:53.279 12:57:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:53.279 12:57:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.279 12:57:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.279 12:57:58 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:14:53.279 12:57:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:53.279 12:57:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:53.279 12:57:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.279 12:57:58 -- common/autotest_common.sh@10 -- # set +x 00:14:59.869 12:58:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:59.869 12:58:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.869 12:58:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.869 12:58:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.869 12:58:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.869 12:58:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.869 12:58:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.869 12:58:04 -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.869 12:58:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.869 12:58:04 -- nvmf/common.sh@296 -- # e810=() 00:14:59.869 12:58:04 -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.869 12:58:04 -- nvmf/common.sh@297 -- # x722=() 00:14:59.869 12:58:04 -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.869 12:58:04 -- nvmf/common.sh@298 -- # mlx=() 00:14:59.869 12:58:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.869 12:58:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.869 12:58:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.869 12:58:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.869 12:58:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.869 12:58:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.869 12:58:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:59.869 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:59.869 12:58:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.869 12:58:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:59.869 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:59.869 12:58:04 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.869 12:58:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.869 12:58:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.869 12:58:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:59.869 12:58:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.869 12:58:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:59.869 Found net devices under 0000:31:00.0: cvl_0_0 00:14:59.869 12:58:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.869 12:58:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.869 12:58:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.869 12:58:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:59.869 12:58:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.869 12:58:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:59.869 Found net devices under 0000:31:00.1: cvl_0_1 00:14:59.869 12:58:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.869 12:58:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:59.869 12:58:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:59.869 12:58:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:59.869 12:58:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:59.869 12:58:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.869 12:58:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.869 12:58:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.869 12:58:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.869 12:58:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.869 12:58:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.869 12:58:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.869 12:58:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.869 12:58:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.869 12:58:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.869 12:58:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.869 12:58:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.869 12:58:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.130 12:58:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.130 12:58:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.130 12:58:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:00.130 12:58:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.130 12:58:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.130 12:58:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.391 12:58:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:00.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:15:00.391 00:15:00.391 --- 10.0.0.2 ping statistics --- 00:15:00.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.391 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:15:00.391 12:58:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:15:00.391 00:15:00.391 --- 10.0.0.1 ping statistics --- 00:15:00.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.391 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:15:00.391 12:58:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.391 12:58:05 -- nvmf/common.sh@411 -- # return 0 00:15:00.391 12:58:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:00.391 12:58:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.391 12:58:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:00.391 12:58:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:00.391 12:58:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.391 12:58:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:00.391 12:58:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:00.391 12:58:05 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:00.391 12:58:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:00.391 12:58:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:00.391 12:58:05 -- common/autotest_common.sh@10 -- # set +x 00:15:00.391 12:58:05 -- nvmf/common.sh@470 -- # nvmfpid=3916243 00:15:00.391 12:58:05 -- nvmf/common.sh@471 -- # waitforlisten 3916243 00:15:00.391 12:58:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.391 12:58:05 -- common/autotest_common.sh@817 -- # '[' -z 3916243 ']' 00:15:00.391 12:58:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.391 12:58:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.391 12:58:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.391 12:58:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.391 12:58:05 -- common/autotest_common.sh@10 -- # set +x 00:15:00.391 [2024-04-26 12:58:05.281473] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:00.391 [2024-04-26 12:58:05.281522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.391 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.391 [2024-04-26 12:58:05.346004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.391 [2024-04-26 12:58:05.412175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
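For these phy (NET_TYPE=phy) runs, nvmf/common.sh turns the two e810 ports found above into a point-to-point test link: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), TCP port 4420 is opened on the initiator-side interface, and the two pings confirm reachability in both directions before nvmf_tgt is started inside the namespace. A condensed sketch of that setup, taken from the ip/iptables/ping commands in the trace (interface names and addresses as logged, binary paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open TCP/4420, as in the trace
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

    # The target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF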
00:15:00.391 [2024-04-26 12:58:05.412210] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.391 [2024-04-26 12:58:05.412218] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.391 [2024-04-26 12:58:05.412226] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.391 [2024-04-26 12:58:05.412233] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.391 [2024-04-26 12:58:05.412302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.391 [2024-04-26 12:58:05.412438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.391 [2024-04-26 12:58:05.412455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.391 [2024-04-26 12:58:05.412461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.338 12:58:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.338 12:58:06 -- common/autotest_common.sh@850 -- # return 0 00:15:01.338 12:58:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:01.338 12:58:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:01.338 12:58:06 -- common/autotest_common.sh@10 -- # set +x 00:15:01.338 12:58:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.338 12:58:06 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:01.338 [2024-04-26 12:58:06.257967] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.338 12:58:06 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:01.338 12:58:06 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:01.338 12:58:06 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:01.599 Malloc1 00:15:01.599 12:58:06 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:01.599 Malloc2 00:15:01.599 12:58:06 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:01.859 12:58:06 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:02.119 12:58:06 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.119 [2024-04-26 12:58:07.098164] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.119 12:58:07 -- target/ns_masking.sh@61 -- # connect 00:15:02.119 12:58:07 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8799b1e-3ac1-484a-a5ea-ba2deab047c1 -a 10.0.0.2 -s 4420 -i 4 00:15:02.379 12:58:07 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.379 12:58:07 -- common/autotest_common.sh@1184 -- # local i=0 00:15:02.379 12:58:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.379 12:58:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:15:02.379 12:58:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:04.355 12:58:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:04.355 12:58:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:04.355 12:58:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.355 12:58:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:04.355 12:58:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.355 12:58:09 -- common/autotest_common.sh@1194 -- # return 0 00:15:04.355 12:58:09 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:04.355 12:58:09 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:04.355 12:58:09 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:04.355 12:58:09 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:04.355 12:58:09 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:04.355 12:58:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.355 12:58:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.625 [ 0]:0x1 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # nguid=886e047204344a099654ece002e782b2 00:15:04.625 12:58:09 -- target/ns_masking.sh@41 -- # [[ 886e047204344a099654ece002e782b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.625 12:58:09 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:04.625 12:58:09 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:04.625 12:58:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.625 12:58:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.625 [ 0]:0x1 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # nguid=886e047204344a099654ece002e782b2 00:15:04.625 12:58:09 -- target/ns_masking.sh@41 -- # [[ 886e047204344a099654ece002e782b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.625 12:58:09 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:04.625 12:58:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.625 12:58:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.625 [ 1]:0x2 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.625 12:58:09 -- target/ns_masking.sh@40 -- # nguid=0467aacf78cf4bb09c95ab88a7f86d89 00:15:04.625 12:58:09 -- target/ns_masking.sh@41 -- # [[ 0467aacf78cf4bb09c95ab88a7f86d89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.625 12:58:09 -- target/ns_masking.sh@69 -- # disconnect 00:15:04.625 12:58:09 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.887 12:58:09 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.148 12:58:10 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:05.408 12:58:10 -- target/ns_masking.sh@77 -- # connect 1 00:15:05.408 12:58:10 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8799b1e-3ac1-484a-a5ea-ba2deab047c1 -a 10.0.0.2 -s 4420 -i 4 00:15:05.408 12:58:10 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:05.408 12:58:10 -- common/autotest_common.sh@1184 -- # local i=0 00:15:05.408 12:58:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.408 12:58:10 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:15:05.408 12:58:10 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:15:05.408 12:58:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:07.319 12:58:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:07.579 12:58:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:07.579 12:58:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.579 12:58:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:07.579 12:58:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.579 12:58:12 -- common/autotest_common.sh@1194 -- # return 0 00:15:07.579 12:58:12 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:07.579 12:58:12 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:07.579 12:58:12 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:07.579 12:58:12 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:07.579 12:58:12 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:07.579 12:58:12 -- common/autotest_common.sh@638 -- # local es=0 00:15:07.579 12:58:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:07.579 12:58:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:07.579 12:58:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:07.579 12:58:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:07.579 12:58:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:07.579 12:58:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:07.579 12:58:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.579 12:58:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:07.579 12:58:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.579 12:58:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.579 12:58:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:07.579 12:58:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.579 12:58:12 -- common/autotest_common.sh@641 -- # es=1 00:15:07.579 12:58:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:07.579 12:58:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:07.579 12:58:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:07.579 12:58:12 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:07.579 12:58:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.579 12:58:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:07.579 [ 0]:0x2 00:15:07.579 12:58:12 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:15:07.579 12:58:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.839 12:58:12 -- target/ns_masking.sh@40 -- # nguid=0467aacf78cf4bb09c95ab88a7f86d89 00:15:07.839 12:58:12 -- target/ns_masking.sh@41 -- # [[ 0467aacf78cf4bb09c95ab88a7f86d89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.839 12:58:12 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:07.839 12:58:12 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:07.839 12:58:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.839 12:58:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:07.839 [ 0]:0x1 00:15:07.839 12:58:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.839 12:58:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.839 12:58:12 -- target/ns_masking.sh@40 -- # nguid=886e047204344a099654ece002e782b2 00:15:07.839 12:58:12 -- target/ns_masking.sh@41 -- # [[ 886e047204344a099654ece002e782b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.839 12:58:12 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:07.839 12:58:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.839 12:58:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:08.099 [ 1]:0x2 00:15:08.099 12:58:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.099 12:58:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:08.099 12:58:12 -- target/ns_masking.sh@40 -- # nguid=0467aacf78cf4bb09c95ab88a7f86d89 00:15:08.099 12:58:12 -- target/ns_masking.sh@41 -- # [[ 0467aacf78cf4bb09c95ab88a7f86d89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.099 12:58:12 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.099 12:58:13 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:08.099 12:58:13 -- common/autotest_common.sh@638 -- # local es=0 00:15:08.099 12:58:13 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:08.099 12:58:13 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:08.099 12:58:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.099 12:58:13 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:08.099 12:58:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.099 12:58:13 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:08.099 12:58:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:08.099 12:58:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:08.360 12:58:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.360 12:58:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:08.360 12:58:13 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:08.360 12:58:13 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.360 12:58:13 -- common/autotest_common.sh@641 -- # es=1 00:15:08.360 12:58:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:08.360 12:58:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:08.360 12:58:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:08.360 12:58:13 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:08.360 12:58:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:08.360 12:58:13 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:08.360 [ 0]:0x2 00:15:08.360 12:58:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.360 12:58:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:08.360 12:58:13 -- target/ns_masking.sh@40 -- # nguid=0467aacf78cf4bb09c95ab88a7f86d89 00:15:08.360 12:58:13 -- target/ns_masking.sh@41 -- # [[ 0467aacf78cf4bb09c95ab88a7f86d89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.360 12:58:13 -- target/ns_masking.sh@91 -- # disconnect 00:15:08.360 12:58:13 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.360 12:58:13 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.620 12:58:13 -- target/ns_masking.sh@95 -- # connect 2 00:15:08.620 12:58:13 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8799b1e-3ac1-484a-a5ea-ba2deab047c1 -a 10.0.0.2 -s 4420 -i 4 00:15:08.620 12:58:13 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:08.620 12:58:13 -- common/autotest_common.sh@1184 -- # local i=0 00:15:08.620 12:58:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.620 12:58:13 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:08.620 12:58:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:08.620 12:58:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:11.164 12:58:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:11.164 12:58:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:11.164 12:58:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.164 12:58:15 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:11.164 12:58:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.164 12:58:15 -- common/autotest_common.sh@1194 -- # return 0 00:15:11.164 12:58:15 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:11.164 12:58:15 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:11.164 12:58:15 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:11.164 12:58:15 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:11.164 12:58:15 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:11.164 12:58:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:11.164 12:58:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:11.164 [ 0]:0x1 00:15:11.164 12:58:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.164 12:58:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:11.164 12:58:15 -- target/ns_masking.sh@40 -- # nguid=886e047204344a099654ece002e782b2 00:15:11.164 12:58:15 -- target/ns_masking.sh@41 -- # [[ 886e047204344a099654ece002e782b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.164 12:58:15 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:11.164 12:58:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:11.164 12:58:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:11.164 [ 1]:0x2 
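Every ns_is_visible / NOT ns_is_visible step above runs the same probe against the connected controller: list the active namespaces, then check that the namespace's NGUID is non-zero; in this trace a masked namespace comes back with an all-zero NGUID, which is exactly what the NOT checks assert against. A rough bash equivalent of the helper, reconstructed from the nvme and jq commands in the log (the function body is an illustrative sketch, not the literal ns_masking.sh source):

    ns_is_visible() {
        local nsid=$1                              # e.g. 0x1 or 0x2, as used above
        # Active namespace list of the connected controller; prints lines like "[ 0]:0x1".
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # A namespace that is really attached reports a non-zero NGUID; the masked
        # case in the trace comes back as 32 zeros, so this test fails there.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }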
00:15:11.164 12:58:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.164 12:58:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:11.164 12:58:15 -- target/ns_masking.sh@40 -- # nguid=0467aacf78cf4bb09c95ab88a7f86d89 00:15:11.164 12:58:15 -- target/ns_masking.sh@41 -- # [[ 0467aacf78cf4bb09c95ab88a7f86d89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.164 12:58:15 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.164 12:58:16 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:11.164 12:58:16 -- common/autotest_common.sh@638 -- # local es=0 00:15:11.164 12:58:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.164 12:58:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:11.164 12:58:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.164 12:58:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:11.164 12:58:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.164 12:58:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:11.164 12:58:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:11.164 12:58:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:11.164 12:58:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.164 12:58:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:11.164 12:58:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:11.164 12:58:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.164 12:58:16 -- common/autotest_common.sh@641 -- # es=1 00:15:11.164 12:58:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:11.164 12:58:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:11.164 12:58:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:11.164 12:58:16 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:11.164 12:58:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:11.164 12:58:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:11.164 [ 0]:0x2 00:15:11.164 12:58:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:11.164 12:58:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.424 12:58:16 -- target/ns_masking.sh@40 -- # nguid=0467aacf78cf4bb09c95ab88a7f86d89 00:15:11.424 12:58:16 -- target/ns_masking.sh@41 -- # [[ 0467aacf78cf4bb09c95ab88a7f86d89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.424 12:58:16 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.424 12:58:16 -- common/autotest_common.sh@638 -- # local es=0 00:15:11.424 12:58:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.424 12:58:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.424 12:58:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.424 12:58:16 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.424 12:58:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.424 12:58:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.424 12:58:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.424 12:58:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.424 12:58:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:11.424 12:58:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.424 [2024-04-26 12:58:16.386906] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:11.424 request: 00:15:11.424 { 00:15:11.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.424 "nsid": 2, 00:15:11.424 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.424 "method": "nvmf_ns_remove_host", 00:15:11.424 "req_id": 1 00:15:11.424 } 00:15:11.424 Got JSON-RPC error response 00:15:11.424 response: 00:15:11.424 { 00:15:11.424 "code": -32602, 00:15:11.424 "message": "Invalid parameters" 00:15:11.424 } 00:15:11.424 12:58:16 -- common/autotest_common.sh@641 -- # es=1 00:15:11.424 12:58:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:11.424 12:58:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:11.424 12:58:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:11.424 12:58:16 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:11.424 12:58:16 -- common/autotest_common.sh@638 -- # local es=0 00:15:11.424 12:58:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.424 12:58:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:11.424 12:58:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.424 12:58:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:11.424 12:58:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.424 12:58:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:11.424 12:58:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:11.424 12:58:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:11.424 12:58:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.424 12:58:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:11.424 12:58:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:11.424 12:58:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.424 12:58:16 -- common/autotest_common.sh@641 -- # es=1 00:15:11.424 12:58:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:11.684 12:58:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:11.684 12:58:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:11.684 12:58:16 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:11.684 12:58:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:11.684 12:58:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:11.684 [ 0]:0x2 00:15:11.684 12:58:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.684 12:58:16 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:15:11.684 12:58:16 -- target/ns_masking.sh@40 -- # nguid=0467aacf78cf4bb09c95ab88a7f86d89 00:15:11.684 12:58:16 -- target/ns_masking.sh@41 -- # [[ 0467aacf78cf4bb09c95ab88a7f86d89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.684 12:58:16 -- target/ns_masking.sh@108 -- # disconnect 00:15:11.684 12:58:16 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.684 12:58:16 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.945 12:58:16 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:11.945 12:58:16 -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:11.945 12:58:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:11.945 12:58:16 -- nvmf/common.sh@117 -- # sync 00:15:11.945 12:58:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.945 12:58:16 -- nvmf/common.sh@120 -- # set +e 00:15:11.945 12:58:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.945 12:58:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.945 rmmod nvme_tcp 00:15:11.945 rmmod nvme_fabrics 00:15:11.945 rmmod nvme_keyring 00:15:11.945 12:58:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.945 12:58:16 -- nvmf/common.sh@124 -- # set -e 00:15:11.945 12:58:16 -- nvmf/common.sh@125 -- # return 0 00:15:11.945 12:58:16 -- nvmf/common.sh@478 -- # '[' -n 3916243 ']' 00:15:11.945 12:58:16 -- nvmf/common.sh@479 -- # killprocess 3916243 00:15:11.945 12:58:16 -- common/autotest_common.sh@936 -- # '[' -z 3916243 ']' 00:15:11.945 12:58:16 -- common/autotest_common.sh@940 -- # kill -0 3916243 00:15:11.945 12:58:16 -- common/autotest_common.sh@941 -- # uname 00:15:11.945 12:58:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:11.945 12:58:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3916243 00:15:12.205 12:58:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.205 12:58:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.205 12:58:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3916243' 00:15:12.205 killing process with pid 3916243 00:15:12.205 12:58:17 -- common/autotest_common.sh@955 -- # kill 3916243 00:15:12.205 12:58:17 -- common/autotest_common.sh@960 -- # wait 3916243 00:15:12.205 12:58:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:12.205 12:58:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:12.205 12:58:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:12.205 12:58:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.205 12:58:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.205 12:58:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.205 12:58:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.205 12:58:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.746 12:58:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:14.746 00:15:14.746 real 0m21.260s 00:15:14.746 user 0m51.513s 00:15:14.746 sys 0m6.732s 00:15:14.746 12:58:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.746 12:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:14.746 ************************************ 00:15:14.746 END TEST nvmf_ns_masking 00:15:14.746 
************************************ 00:15:14.746 12:58:19 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:14.746 12:58:19 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:14.746 12:58:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:14.746 12:58:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.746 12:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:14.746 ************************************ 00:15:14.746 START TEST nvmf_nvme_cli 00:15:14.746 ************************************ 00:15:14.746 12:58:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:14.746 * Looking for test storage... 00:15:14.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.746 12:58:19 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.746 12:58:19 -- nvmf/common.sh@7 -- # uname -s 00:15:14.746 12:58:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.746 12:58:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.746 12:58:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.746 12:58:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.746 12:58:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.746 12:58:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.746 12:58:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.746 12:58:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.746 12:58:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.746 12:58:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.746 12:58:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:14.746 12:58:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:14.746 12:58:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.746 12:58:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.746 12:58:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.746 12:58:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.746 12:58:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.746 12:58:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.746 12:58:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.746 12:58:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.746 12:58:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.746 12:58:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.746 12:58:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.746 12:58:19 -- paths/export.sh@5 -- # export PATH 00:15:14.746 12:58:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.746 12:58:19 -- nvmf/common.sh@47 -- # : 0 00:15:14.746 12:58:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.746 12:58:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.746 12:58:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.746 12:58:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.746 12:58:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.746 12:58:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.746 12:58:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.746 12:58:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.746 12:58:19 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.746 12:58:19 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.746 12:58:19 -- target/nvme_cli.sh@14 -- # devs=() 00:15:14.746 12:58:19 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:14.746 12:58:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:14.746 12:58:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.746 12:58:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:14.746 12:58:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:14.746 12:58:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:14.746 12:58:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.746 12:58:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.746 12:58:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.746 12:58:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:14.746 12:58:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:14.746 12:58:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:14.746 12:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:22.883 12:58:26 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:22.883 12:58:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.883 12:58:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.883 12:58:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.883 12:58:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.883 12:58:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.883 12:58:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.883 12:58:26 -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.883 12:58:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.883 12:58:26 -- nvmf/common.sh@296 -- # e810=() 00:15:22.883 12:58:26 -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.883 12:58:26 -- nvmf/common.sh@297 -- # x722=() 00:15:22.883 12:58:26 -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.883 12:58:26 -- nvmf/common.sh@298 -- # mlx=() 00:15:22.883 12:58:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.883 12:58:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.883 12:58:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.883 12:58:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.883 12:58:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.883 12:58:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.883 12:58:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:22.883 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:22.883 12:58:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.883 12:58:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:22.883 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:22.883 12:58:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
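
Note: the device scan traced above matches the two Intel E810 functions (vendor 0x8086, device 0x159b) and then maps each PCI address to its kernel netdev through sysfs. The same lookup can be done by hand; a small sketch using the 0000:31:00.0 address reported in this run:

  pci=0000:31:00.0                       # first E810 port found in this run
  cat /sys/bus/pci/devices/$pci/vendor   # expect 0x8086
  cat /sys/bus/pci/devices/$pci/device   # expect 0x159b
  ls /sys/bus/pci/devices/$pci/net/      # netdev name, e.g. cvl_0_0
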
00:15:22.883 12:58:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.883 12:58:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.883 12:58:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.883 12:58:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.883 12:58:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:22.883 12:58:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.883 12:58:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:22.883 Found net devices under 0000:31:00.0: cvl_0_0 00:15:22.883 12:58:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.883 12:58:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.883 12:58:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.883 12:58:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:22.883 12:58:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.883 12:58:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:22.883 Found net devices under 0000:31:00.1: cvl_0_1 00:15:22.884 12:58:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.884 12:58:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:22.884 12:58:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:22.884 12:58:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:22.884 12:58:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:22.884 12:58:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:22.884 12:58:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.884 12:58:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.884 12:58:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.884 12:58:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.884 12:58:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.884 12:58:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.884 12:58:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.884 12:58:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.884 12:58:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.884 12:58:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.884 12:58:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.884 12:58:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.884 12:58:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.884 12:58:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.884 12:58:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.884 12:58:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.884 12:58:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.884 12:58:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.884 12:58:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.884 12:58:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
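
Note: the nvmf_tcp_init steps traced above put one E810 port (cvl_0_0, 10.0.0.2) into a dedicated network namespace for the target and leave the other port (cvl_0_1, 10.0.0.1) in the root namespace for the initiator; the echo reply for the ping just issued follows below. Condensed to its essential commands, exactly as run here:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target connectivity check
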
00:15:22.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:15:22.884 00:15:22.884 --- 10.0.0.2 ping statistics --- 00:15:22.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.884 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:15:22.884 12:58:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:15:22.884 00:15:22.884 --- 10.0.0.1 ping statistics --- 00:15:22.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.884 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:15:22.884 12:58:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.884 12:58:26 -- nvmf/common.sh@411 -- # return 0 00:15:22.884 12:58:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:22.884 12:58:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.884 12:58:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:22.884 12:58:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:22.884 12:58:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.884 12:58:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:22.884 12:58:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:22.884 12:58:26 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:22.884 12:58:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:22.884 12:58:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:22.884 12:58:26 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 12:58:26 -- nvmf/common.sh@470 -- # nvmfpid=3922823 00:15:22.884 12:58:26 -- nvmf/common.sh@471 -- # waitforlisten 3922823 00:15:22.884 12:58:26 -- common/autotest_common.sh@817 -- # '[' -z 3922823 ']' 00:15:22.884 12:58:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.884 12:58:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.884 12:58:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:22.884 12:58:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.884 12:58:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:22.884 12:58:26 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 [2024-04-26 12:58:26.857362] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:22.884 [2024-04-26 12:58:26.857425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.884 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.884 [2024-04-26 12:58:26.931331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.884 [2024-04-26 12:58:27.005903] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.884 [2024-04-26 12:58:27.005945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
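
Note: nvmfappstart then launches the SPDK target inside that namespace with a four-core mask and blocks until its JSON-RPC socket is ready (the remaining app_setup_trace notices and reactor-start messages continue below). A hedged recreation of the equivalent manual steps; using framework_wait_init as the readiness check is an illustration, not the waitforlisten implementation itself:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until subsystem init has finished and the RPC socket answers
  $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
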
00:15:22.884 [2024-04-26 12:58:27.005954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.884 [2024-04-26 12:58:27.005961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.884 [2024-04-26 12:58:27.005967] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.884 [2024-04-26 12:58:27.006180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.884 [2024-04-26 12:58:27.006267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.884 [2024-04-26 12:58:27.006424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.884 [2024-04-26 12:58:27.006424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.884 12:58:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:22.884 12:58:27 -- common/autotest_common.sh@850 -- # return 0 00:15:22.884 12:58:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:22.884 12:58:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 12:58:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.884 12:58:27 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 [2024-04-26 12:58:27.685388] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 Malloc0 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 Malloc1 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 [2024-04-26 12:58:27.775286] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:22.884 12:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.884 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 12:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.884 12:58:27 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:22.884 00:15:22.884 Discovery Log Number of Records 2, Generation counter 2 00:15:22.884 =====Discovery Log Entry 0====== 00:15:22.884 trtype: tcp 00:15:22.884 adrfam: ipv4 00:15:22.884 subtype: current discovery subsystem 00:15:22.884 treq: not required 00:15:22.884 portid: 0 00:15:22.884 trsvcid: 4420 00:15:22.884 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:22.884 traddr: 10.0.0.2 00:15:22.884 eflags: explicit discovery connections, duplicate discovery information 00:15:22.884 sectype: none 00:15:22.884 =====Discovery Log Entry 1====== 00:15:22.884 trtype: tcp 00:15:22.884 adrfam: ipv4 00:15:22.884 subtype: nvme subsystem 00:15:22.884 treq: not required 00:15:22.884 portid: 0 00:15:22.884 trsvcid: 4420 00:15:22.884 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:22.884 traddr: 10.0.0.2 00:15:22.884 eflags: none 00:15:22.884 sectype: none 00:15:22.884 12:58:27 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:22.884 12:58:27 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:22.884 12:58:27 -- nvmf/common.sh@511 -- # local dev _ 00:15:22.884 12:58:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:22.884 12:58:27 -- nvmf/common.sh@510 -- # nvme list 00:15:22.885 12:58:27 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:22.885 12:58:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:22.885 12:58:27 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:22.885 12:58:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:22.885 12:58:27 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:22.885 12:58:27 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.800 12:58:29 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:24.801 12:58:29 -- common/autotest_common.sh@1184 -- # local i=0 00:15:24.801 12:58:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.801 12:58:29 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:24.801 12:58:29 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:24.801 12:58:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:26.728 12:58:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:26.728 12:58:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:26.728 12:58:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.728 12:58:31 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
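
Note: everything the nvme_cli test has done so far fits in a handful of target-side RPCs followed by a stock nvme-cli discover/connect on the host; the waitforserial loop above simply counts the resulting block devices, and its return is traced below. A condensed recap with the same NQNs and addresses as this run (the --hostnqn/--hostid options generated with nvme gen-hostnqn are omitted here for brevity):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # target side: TCP transport, two 64 MB malloc bdevs, one subsystem, listeners
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # host side: discover, connect, then expect two SPDK namespaces in lsblk
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
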
00:15:26.728 12:58:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.728 12:58:31 -- common/autotest_common.sh@1194 -- # return 0 00:15:26.728 12:58:31 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:26.728 12:58:31 -- nvmf/common.sh@511 -- # local dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@510 -- # nvme list 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:26.728 /dev/nvme0n1 ]] 00:15:26.728 12:58:31 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:26.728 12:58:31 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:26.728 12:58:31 -- nvmf/common.sh@511 -- # local dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@510 -- # nvme list 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:26.728 12:58:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:26.728 12:58:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:26.728 12:58:31 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:26.728 12:58:31 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:26.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.728 12:58:31 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:26.728 12:58:31 -- common/autotest_common.sh@1205 -- # local i=0 00:15:26.728 12:58:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:26.728 12:58:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.728 12:58:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:26.728 12:58:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.728 12:58:31 -- common/autotest_common.sh@1217 -- # return 0 00:15:26.728 12:58:31 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:26.728 12:58:31 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.728 12:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.728 12:58:31 -- common/autotest_common.sh@10 -- # set +x 00:15:26.728 12:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.728 12:58:31 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:26.728 12:58:31 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:26.728 12:58:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:26.728 12:58:31 -- nvmf/common.sh@117 -- # sync 00:15:26.728 12:58:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.728 12:58:31 -- nvmf/common.sh@120 -- # set +e 00:15:26.728 12:58:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.728 12:58:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.728 rmmod nvme_tcp 00:15:26.728 rmmod nvme_fabrics 00:15:26.728 rmmod nvme_keyring 00:15:26.728 12:58:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.728 12:58:31 -- nvmf/common.sh@124 -- # set -e 00:15:26.728 12:58:31 -- nvmf/common.sh@125 -- # return 0 00:15:26.728 12:58:31 -- nvmf/common.sh@478 -- # '[' -n 3922823 ']' 00:15:26.728 12:58:31 -- nvmf/common.sh@479 -- # killprocess 3922823 00:15:26.728 12:58:31 -- common/autotest_common.sh@936 -- # '[' -z 3922823 ']' 00:15:26.728 12:58:31 -- common/autotest_common.sh@940 -- # kill -0 3922823 00:15:26.728 12:58:31 -- common/autotest_common.sh@941 -- # uname 00:15:26.728 12:58:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.728 12:58:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3922823 00:15:26.990 12:58:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.990 12:58:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.990 12:58:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3922823' 00:15:26.990 killing process with pid 3922823 00:15:26.990 12:58:31 -- common/autotest_common.sh@955 -- # kill 3922823 00:15:26.990 12:58:31 -- common/autotest_common.sh@960 -- # wait 3922823 00:15:26.990 12:58:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:26.990 12:58:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:26.990 12:58:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:26.990 12:58:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.990 12:58:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.990 12:58:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.990 12:58:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.990 12:58:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.536 12:58:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.536 00:15:29.536 real 0m14.652s 00:15:29.536 user 0m21.979s 00:15:29.536 sys 0m5.931s 00:15:29.536 12:58:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:29.536 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:15:29.536 ************************************ 00:15:29.536 END TEST nvmf_nvme_cli 00:15:29.536 ************************************ 00:15:29.536 12:58:34 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:15:29.536 12:58:34 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:29.536 12:58:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:29.536 12:58:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.536 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:15:29.536 ************************************ 00:15:29.536 START TEST nvmf_host_management 00:15:29.536 ************************************ 00:15:29.536 12:58:34 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:29.536 * Looking for test storage... 00:15:29.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.536 12:58:34 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.536 12:58:34 -- nvmf/common.sh@7 -- # uname -s 00:15:29.536 12:58:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.536 12:58:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.536 12:58:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.536 12:58:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.536 12:58:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.536 12:58:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.536 12:58:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.536 12:58:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.536 12:58:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.536 12:58:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.536 12:58:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.536 12:58:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.536 12:58:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.536 12:58:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.536 12:58:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.536 12:58:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.537 12:58:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.537 12:58:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.537 12:58:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.537 12:58:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.537 12:58:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.537 12:58:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.537 12:58:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.537 12:58:34 -- paths/export.sh@5 -- # export PATH 00:15:29.537 12:58:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.537 12:58:34 -- nvmf/common.sh@47 -- # : 0 00:15:29.537 12:58:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.537 12:58:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.537 12:58:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.537 12:58:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.537 12:58:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.537 12:58:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.537 12:58:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.537 12:58:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.537 12:58:34 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.537 12:58:34 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.537 12:58:34 -- target/host_management.sh@105 -- # nvmftestinit 00:15:29.537 12:58:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:29.537 12:58:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.537 12:58:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:29.537 12:58:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:29.537 12:58:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:29.537 12:58:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.537 12:58:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.537 12:58:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.537 12:58:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:29.537 12:58:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:29.537 12:58:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.537 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:15:37.687 12:58:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.688 12:58:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.688 12:58:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.688 12:58:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.688 12:58:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.688 12:58:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.688 12:58:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.688 12:58:41 -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.688 12:58:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.688 
12:58:41 -- nvmf/common.sh@296 -- # e810=() 00:15:37.688 12:58:41 -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.688 12:58:41 -- nvmf/common.sh@297 -- # x722=() 00:15:37.688 12:58:41 -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.688 12:58:41 -- nvmf/common.sh@298 -- # mlx=() 00:15:37.688 12:58:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.688 12:58:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.688 12:58:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.688 12:58:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.688 12:58:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.688 12:58:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.688 12:58:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:37.688 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:37.688 12:58:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.688 12:58:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:37.688 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:37.688 12:58:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.688 12:58:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.688 12:58:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.688 12:58:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.688 12:58:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.688 12:58:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:15:37.688 Found net devices under 0000:31:00.0: cvl_0_0 00:15:37.688 12:58:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.688 12:58:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.688 12:58:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.688 12:58:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.688 12:58:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.688 12:58:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:37.688 Found net devices under 0000:31:00.1: cvl_0_1 00:15:37.688 12:58:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.688 12:58:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:37.688 12:58:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:37.688 12:58:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:37.688 12:58:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.688 12:58:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.688 12:58:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.688 12:58:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.688 12:58:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.688 12:58:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.688 12:58:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.688 12:58:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.688 12:58:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.688 12:58:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.688 12:58:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.688 12:58:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.688 12:58:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.688 12:58:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.688 12:58:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.688 12:58:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.688 12:58:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.688 12:58:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.688 12:58:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.688 12:58:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:15:37.688 00:15:37.688 --- 10.0.0.2 ping statistics --- 00:15:37.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.688 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:15:37.688 12:58:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:15:37.688 00:15:37.688 --- 10.0.0.1 ping statistics --- 00:15:37.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.688 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:15:37.688 12:58:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.688 12:58:41 -- nvmf/common.sh@411 -- # return 0 00:15:37.688 12:58:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:37.688 12:58:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.688 12:58:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:37.688 12:58:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.688 12:58:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:37.688 12:58:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:37.688 12:58:41 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:15:37.688 12:58:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:37.688 12:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.688 12:58:41 -- common/autotest_common.sh@10 -- # set +x 00:15:37.688 ************************************ 00:15:37.688 START TEST nvmf_host_management 00:15:37.688 ************************************ 00:15:37.688 12:58:41 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:15:37.688 12:58:41 -- target/host_management.sh@69 -- # starttarget 00:15:37.688 12:58:41 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:37.688 12:58:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:37.688 12:58:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:37.688 12:58:41 -- common/autotest_common.sh@10 -- # set +x 00:15:37.688 12:58:41 -- nvmf/common.sh@470 -- # nvmfpid=3928270 00:15:37.688 12:58:41 -- nvmf/common.sh@471 -- # waitforlisten 3928270 00:15:37.688 12:58:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:37.688 12:58:41 -- common/autotest_common.sh@817 -- # '[' -z 3928270 ']' 00:15:37.688 12:58:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.688 12:58:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:37.688 12:58:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.688 12:58:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:37.688 12:58:41 -- common/autotest_common.sh@10 -- # set +x 00:15:37.688 [2024-04-26 12:58:41.803944] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:37.688 [2024-04-26 12:58:41.803997] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.688 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.688 [2024-04-26 12:58:41.882346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.688 [2024-04-26 12:58:41.977256] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:37.689 [2024-04-26 12:58:41.977313] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.689 [2024-04-26 12:58:41.977321] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.689 [2024-04-26 12:58:41.977327] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.689 [2024-04-26 12:58:41.977334] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.689 [2024-04-26 12:58:41.977472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.689 [2024-04-26 12:58:41.977637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.689 [2024-04-26 12:58:41.977779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.689 [2024-04-26 12:58:41.977780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:37.689 12:58:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.689 12:58:42 -- common/autotest_common.sh@850 -- # return 0 00:15:37.689 12:58:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:37.689 12:58:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:37.689 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.689 12:58:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.689 12:58:42 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.689 12:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.689 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.689 [2024-04-26 12:58:42.633304] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.689 12:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.689 12:58:42 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:37.689 12:58:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:37.689 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.689 12:58:42 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:37.689 12:58:42 -- target/host_management.sh@23 -- # cat 00:15:37.689 12:58:42 -- target/host_management.sh@30 -- # rpc_cmd 00:15:37.689 12:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.689 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.689 Malloc0 00:15:37.689 [2024-04-26 12:58:42.692506] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.689 12:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.689 12:58:42 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:37.689 12:58:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:37.689 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.689 12:58:42 -- target/host_management.sh@73 -- # perfpid=3928342 00:15:37.689 12:58:42 -- target/host_management.sh@74 -- # waitforlisten 3928342 /var/tmp/bdevperf.sock 00:15:37.689 12:58:42 -- common/autotest_common.sh@817 -- # '[' -z 3928342 ']' 00:15:37.689 12:58:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.689 12:58:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:37.689 12:58:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:15:37.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.689 12:58:42 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:37.689 12:58:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:37.689 12:58:42 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:37.689 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.689 12:58:42 -- nvmf/common.sh@521 -- # config=() 00:15:37.953 12:58:42 -- nvmf/common.sh@521 -- # local subsystem config 00:15:37.953 12:58:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:37.953 12:58:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:37.953 { 00:15:37.953 "params": { 00:15:37.953 "name": "Nvme$subsystem", 00:15:37.953 "trtype": "$TEST_TRANSPORT", 00:15:37.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.953 "adrfam": "ipv4", 00:15:37.953 "trsvcid": "$NVMF_PORT", 00:15:37.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.953 "hdgst": ${hdgst:-false}, 00:15:37.953 "ddgst": ${ddgst:-false} 00:15:37.953 }, 00:15:37.953 "method": "bdev_nvme_attach_controller" 00:15:37.953 } 00:15:37.953 EOF 00:15:37.953 )") 00:15:37.953 12:58:42 -- nvmf/common.sh@543 -- # cat 00:15:37.953 12:58:42 -- nvmf/common.sh@545 -- # jq . 00:15:37.953 12:58:42 -- nvmf/common.sh@546 -- # IFS=, 00:15:37.953 12:58:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:37.953 "params": { 00:15:37.953 "name": "Nvme0", 00:15:37.953 "trtype": "tcp", 00:15:37.953 "traddr": "10.0.0.2", 00:15:37.953 "adrfam": "ipv4", 00:15:37.953 "trsvcid": "4420", 00:15:37.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:37.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:37.953 "hdgst": false, 00:15:37.953 "ddgst": false 00:15:37.953 }, 00:15:37.953 "method": "bdev_nvme_attach_controller" 00:15:37.953 }' 00:15:37.953 [2024-04-26 12:58:42.787346] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:37.953 [2024-04-26 12:58:42.787396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928342 ] 00:15:37.953 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.953 [2024-04-26 12:58:42.847125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.953 [2024-04-26 12:58:42.910353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.214 Running I/O for 10 seconds... 
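Note: the bdevperf instance above receives its attach configuration through --json /dev/fd/63; gen_nvmf_target_json 0 expands the heredoc template once for subsystem 0, and the resolved bdev_nvme_attach_controller entry is exactly what printf echoes at 12:58:42. The envelope around that entry is not printed in this log, so the reconstruction below assumes the usual SPDK "subsystems"/"bdev" config layout and a hypothetical file name:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10

In other words: 64 outstanding 64 KiB verify I/Os for 10 seconds against the namespace exported at 10.0.0.2:4420 over TCP.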
00:15:38.806 12:58:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:38.806 12:58:43 -- common/autotest_common.sh@850 -- # return 0 00:15:38.806 12:58:43 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:38.806 12:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.806 12:58:43 -- common/autotest_common.sh@10 -- # set +x 00:15:38.806 12:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.806 12:58:43 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:38.806 12:58:43 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:38.806 12:58:43 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:38.806 12:58:43 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:38.806 12:58:43 -- target/host_management.sh@52 -- # local ret=1 00:15:38.806 12:58:43 -- target/host_management.sh@53 -- # local i 00:15:38.806 12:58:43 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:38.806 12:58:43 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:38.806 12:58:43 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:38.806 12:58:43 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:38.806 12:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.806 12:58:43 -- common/autotest_common.sh@10 -- # set +x 00:15:38.806 12:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.806 12:58:43 -- target/host_management.sh@55 -- # read_io_count=835 00:15:38.806 12:58:43 -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:15:38.806 12:58:43 -- target/host_management.sh@59 -- # ret=0 00:15:38.806 12:58:43 -- target/host_management.sh@60 -- # break 00:15:38.806 12:58:43 -- target/host_management.sh@64 -- # return 0 00:15:38.806 12:58:43 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:38.806 12:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.806 12:58:43 -- common/autotest_common.sh@10 -- # set +x 00:15:38.806 [2024-04-26 12:58:43.638269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.806 [2024-04-26 12:58:43.638311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.806 [2024-04-26 12:58:43.638322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.806 [2024-04-26 12:58:43.638329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.806 [2024-04-26 12:58:43.638337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.806 [2024-04-26 12:58:43.638344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.806 [2024-04-26 12:58:43.638352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.806 [2024-04-26 12:58:43.638359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.806 [2024-04-26 12:58:43.638366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157e560 is same with the state(5) to be set 00:15:38.806 12:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.806 12:58:43 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:38.806 12:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.806 12:58:43 -- common/autotest_common.sh@10 -- # set +x 00:15:38.807 [2024-04-26 12:58:43.649959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157e560 (9): Bad file descriptor 00:15:38.807 [2024-04-26 12:58:43.650033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650333] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.807 [2024-04-26 12:58:43.650605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.807 [2024-04-26 12:58:43.650614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650654] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650816] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.650992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.650999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.651008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.651015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.651023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.651031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.651039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.651046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.651055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.651062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.651071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.808 [2024-04-26 12:58:43.651077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.808 [2024-04-26 12:58:43.651125] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x198ed60 was disconnected and freed. reset controller. 
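Note: the long run of WRITE completions with ABORTED - SQ DELETION status above is the intended behaviour of this step, not a transport failure. host_management.sh waits until bdevperf has completed at least 100 I/Os (read_io_count=835 here), then revokes the host's access with nvmf_subsystem_remove_host; the target tears down the queue pair (hence the Bad file descriptor on tqpair 0x157e560) and every in-flight write completes as aborted, after which the host is re-added so the controller reset can reconnect. The two RPCs driving this, as echoed above (rpc_cmd forwards to scripts/rpc.py):

# revoke the host while I/O is in flight -> qpair torn down, outstanding writes aborted
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# re-add it so the host-side reset that follows can reconnect
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0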
00:15:38.808 [2024-04-26 12:58:43.652301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:38.808 12:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.808 12:58:43 -- target/host_management.sh@87 -- # sleep 1 00:15:38.808 task offset: 122880 on job bdev=Nvme0n1 fails 00:15:38.808 00:15:38.808 Latency(us) 00:15:38.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.808 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:38.808 Job: Nvme0n1 ended in about 0.59 seconds with error 00:15:38.808 Verification LBA range: start 0x0 length 0x400 00:15:38.808 Nvme0n1 : 0.59 1623.46 101.47 108.23 0.00 35785.24 1481.39 39976.96 00:15:38.808 =================================================================================================================== 00:15:38.808 Total : 1623.46 101.47 108.23 0.00 35785.24 1481.39 39976.96 00:15:38.808 [2024-04-26 12:58:43.654273] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:38.808 [2024-04-26 12:58:43.659708] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:39.752 12:58:44 -- target/host_management.sh@91 -- # kill -9 3928342 00:15:39.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3928342) - No such process 00:15:39.752 12:58:44 -- target/host_management.sh@91 -- # true 00:15:39.752 12:58:44 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:39.752 12:58:44 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:39.752 12:58:44 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:39.752 12:58:44 -- nvmf/common.sh@521 -- # config=() 00:15:39.752 12:58:44 -- nvmf/common.sh@521 -- # local subsystem config 00:15:39.752 12:58:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:39.752 12:58:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:39.752 { 00:15:39.752 "params": { 00:15:39.752 "name": "Nvme$subsystem", 00:15:39.752 "trtype": "$TEST_TRANSPORT", 00:15:39.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.752 "adrfam": "ipv4", 00:15:39.752 "trsvcid": "$NVMF_PORT", 00:15:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.752 "hdgst": ${hdgst:-false}, 00:15:39.752 "ddgst": ${ddgst:-false} 00:15:39.752 }, 00:15:39.752 "method": "bdev_nvme_attach_controller" 00:15:39.752 } 00:15:39.752 EOF 00:15:39.752 )") 00:15:39.752 12:58:44 -- nvmf/common.sh@543 -- # cat 00:15:39.752 12:58:44 -- nvmf/common.sh@545 -- # jq . 00:15:39.752 12:58:44 -- nvmf/common.sh@546 -- # IFS=, 00:15:39.752 12:58:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:39.752 "params": { 00:15:39.752 "name": "Nvme0", 00:15:39.752 "trtype": "tcp", 00:15:39.752 "traddr": "10.0.0.2", 00:15:39.752 "adrfam": "ipv4", 00:15:39.752 "trsvcid": "4420", 00:15:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:39.752 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:39.752 "hdgst": false, 00:15:39.752 "ddgst": false 00:15:39.752 }, 00:15:39.752 "method": "bdev_nvme_attach_controller" 00:15:39.752 }' 00:15:39.752 [2024-04-26 12:58:44.706256] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
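Note: the first bdevperf run is expected to end with an error ("Job Nvme0n1 ended in about 0.59 seconds with error") because of the host removal, and its numbers are self-consistent: with the -o 65536 I/O size, the IOPS column reproduces the MiB/s column.

# sanity check on the table above: IOPS x 64 KiB per I/O, expressed in MiB/s
echo 'scale=2; 1623.46 * 65536 / (1024 * 1024)' | bc   # -> 101.46, matching the reported 101.47 up to rounding

The 108.23 Fail/s presumably counts the writes that completed as ABORTED - SQ DELETION, and the later "kill -9 3928342 ... No such process" is harmless: bdevperf had already exited on its own (spdk_app_stop'd on non-zero), which is why host_management.sh line 91 falls through to true.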
00:15:39.752 [2024-04-26 12:58:44.706308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928778 ] 00:15:39.752 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.752 [2024-04-26 12:58:44.766199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.014 [2024-04-26 12:58:44.828405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.275 Running I/O for 1 seconds... 00:15:41.218 00:15:41.218 Latency(us) 00:15:41.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.218 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:41.218 Verification LBA range: start 0x0 length 0x400 00:15:41.218 Nvme0n1 : 1.01 2210.29 138.14 0.00 0.00 28319.03 1597.44 27852.80 00:15:41.218 =================================================================================================================== 00:15:41.218 Total : 2210.29 138.14 0.00 0.00 28319.03 1597.44 27852.80 00:15:41.218 12:58:46 -- target/host_management.sh@102 -- # stoptarget 00:15:41.218 12:58:46 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:41.478 12:58:46 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:41.478 12:58:46 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:41.478 12:58:46 -- target/host_management.sh@40 -- # nvmftestfini 00:15:41.478 12:58:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:41.478 12:58:46 -- nvmf/common.sh@117 -- # sync 00:15:41.478 12:58:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.478 12:58:46 -- nvmf/common.sh@120 -- # set +e 00:15:41.478 12:58:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.478 12:58:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.478 rmmod nvme_tcp 00:15:41.478 rmmod nvme_fabrics 00:15:41.478 rmmod nvme_keyring 00:15:41.478 12:58:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.478 12:58:46 -- nvmf/common.sh@124 -- # set -e 00:15:41.478 12:58:46 -- nvmf/common.sh@125 -- # return 0 00:15:41.478 12:58:46 -- nvmf/common.sh@478 -- # '[' -n 3928270 ']' 00:15:41.478 12:58:46 -- nvmf/common.sh@479 -- # killprocess 3928270 00:15:41.478 12:58:46 -- common/autotest_common.sh@936 -- # '[' -z 3928270 ']' 00:15:41.478 12:58:46 -- common/autotest_common.sh@940 -- # kill -0 3928270 00:15:41.478 12:58:46 -- common/autotest_common.sh@941 -- # uname 00:15:41.478 12:58:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.478 12:58:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3928270 00:15:41.478 12:58:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:41.478 12:58:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:41.479 12:58:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3928270' 00:15:41.479 killing process with pid 3928270 00:15:41.479 12:58:46 -- common/autotest_common.sh@955 -- # kill 3928270 00:15:41.479 12:58:46 -- common/autotest_common.sh@960 -- # wait 3928270 00:15:41.479 [2024-04-26 12:58:46.520210] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:41.479 12:58:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:41.479 12:58:46 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:41.739 12:58:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:41.739 12:58:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.739 12:58:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.739 12:58:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.739 12:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.739 12:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.652 12:58:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:43.652 00:15:43.652 real 0m6.866s 00:15:43.652 user 0m20.713s 00:15:43.652 sys 0m1.068s 00:15:43.652 12:58:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:43.652 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:15:43.652 ************************************ 00:15:43.652 END TEST nvmf_host_management 00:15:43.652 ************************************ 00:15:43.652 12:58:48 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:43.652 00:15:43.652 real 0m14.428s 00:15:43.652 user 0m22.816s 00:15:43.652 sys 0m6.443s 00:15:43.652 12:58:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:43.652 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:15:43.652 ************************************ 00:15:43.652 END TEST nvmf_host_management 00:15:43.652 ************************************ 00:15:43.652 12:58:48 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:43.652 12:58:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:43.652 12:58:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:43.652 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:15:43.912 ************************************ 00:15:43.912 START TEST nvmf_lvol 00:15:43.912 ************************************ 00:15:43.912 12:58:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:43.912 * Looking for test storage... 
00:15:43.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.912 12:58:48 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.912 12:58:48 -- nvmf/common.sh@7 -- # uname -s 00:15:43.912 12:58:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.912 12:58:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.912 12:58:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.912 12:58:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.912 12:58:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.912 12:58:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.912 12:58:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.912 12:58:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.912 12:58:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.912 12:58:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.173 12:58:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:44.173 12:58:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:44.173 12:58:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.173 12:58:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.173 12:58:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.173 12:58:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.173 12:58:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.173 12:58:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.173 12:58:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.173 12:58:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.173 12:58:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.173 12:58:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.173 12:58:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.173 12:58:48 -- paths/export.sh@5 -- # export PATH 00:15:44.173 12:58:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.173 12:58:48 -- nvmf/common.sh@47 -- # : 0 00:15:44.173 12:58:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.173 12:58:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.173 12:58:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.173 12:58:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.173 12:58:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.173 12:58:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.173 12:58:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.173 12:58:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.173 12:58:48 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.173 12:58:48 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.173 12:58:48 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:44.173 12:58:48 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:44.173 12:58:48 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.173 12:58:48 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:44.173 12:58:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:44.173 12:58:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.173 12:58:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:44.173 12:58:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:44.173 12:58:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:44.173 12:58:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.173 12:58:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.173 12:58:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.173 12:58:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:44.173 12:58:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:44.173 12:58:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:44.173 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:15:52.407 12:58:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:52.407 12:58:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.407 12:58:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.407 12:58:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.407 12:58:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.407 12:58:55 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.407 12:58:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.407 12:58:55 -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.407 12:58:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.407 12:58:55 -- nvmf/common.sh@296 -- # e810=() 00:15:52.407 12:58:55 -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.407 12:58:55 -- nvmf/common.sh@297 -- # x722=() 00:15:52.407 12:58:55 -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.407 12:58:55 -- nvmf/common.sh@298 -- # mlx=() 00:15:52.407 12:58:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.407 12:58:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.407 12:58:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:52.407 12:58:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.407 12:58:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:52.408 12:58:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.408 12:58:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.408 12:58:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:52.408 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:52.408 12:58:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.408 12:58:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:52.408 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:52.408 12:58:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.408 12:58:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.408 12:58:55 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.408 12:58:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:52.408 12:58:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.408 12:58:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:52.408 Found net devices under 0000:31:00.0: cvl_0_0 00:15:52.408 12:58:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.408 12:58:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.408 12:58:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.408 12:58:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:52.408 12:58:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.408 12:58:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:52.408 Found net devices under 0000:31:00.1: cvl_0_1 00:15:52.408 12:58:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.408 12:58:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:52.408 12:58:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:52.408 12:58:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:52.408 12:58:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:52.408 12:58:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.408 12:58:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.408 12:58:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.408 12:58:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.408 12:58:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.408 12:58:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.408 12:58:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.408 12:58:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.408 12:58:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.408 12:58:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.408 12:58:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.408 12:58:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.408 12:58:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.408 12:58:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.408 12:58:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.408 12:58:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.408 12:58:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.408 12:58:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.408 12:58:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.408 12:58:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:15:52.408 00:15:52.408 --- 10.0.0.2 ping statistics --- 00:15:52.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.408 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:15:52.408 12:58:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:15:52.408 00:15:52.408 --- 10.0.0.1 ping statistics --- 00:15:52.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.408 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:15:52.408 12:58:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.408 12:58:56 -- nvmf/common.sh@411 -- # return 0 00:15:52.408 12:58:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:52.408 12:58:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.408 12:58:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:52.408 12:58:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:52.408 12:58:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.408 12:58:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:52.408 12:58:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:52.408 12:58:56 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:52.408 12:58:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:52.408 12:58:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:52.408 12:58:56 -- common/autotest_common.sh@10 -- # set +x 00:15:52.408 12:58:56 -- nvmf/common.sh@470 -- # nvmfpid=3933425 00:15:52.408 12:58:56 -- nvmf/common.sh@471 -- # waitforlisten 3933425 00:15:52.408 12:58:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:52.408 12:58:56 -- common/autotest_common.sh@817 -- # '[' -z 3933425 ']' 00:15:52.408 12:58:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.408 12:58:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.408 12:58:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.408 12:58:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.408 12:58:56 -- common/autotest_common.sh@10 -- # set +x 00:15:52.408 [2024-04-26 12:58:56.408352] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:52.408 [2024-04-26 12:58:56.408413] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.408 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.408 [2024-04-26 12:58:56.480401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.408 [2024-04-26 12:58:56.552783] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.408 [2024-04-26 12:58:56.552825] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.408 [2024-04-26 12:58:56.552833] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.408 [2024-04-26 12:58:56.552845] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.408 [2024-04-26 12:58:56.552850] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:52.408 [2024-04-26 12:58:56.552978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.408 [2024-04-26 12:58:56.553154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.408 [2024-04-26 12:58:56.553158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.408 12:58:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:52.408 12:58:57 -- common/autotest_common.sh@850 -- # return 0 00:15:52.408 12:58:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:52.408 12:58:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:52.408 12:58:57 -- common/autotest_common.sh@10 -- # set +x 00:15:52.408 12:58:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.408 12:58:57 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:52.408 [2024-04-26 12:58:57.353592] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.409 12:58:57 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.668 12:58:57 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:52.668 12:58:57 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.928 12:58:57 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:52.928 12:58:57 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:52.928 12:58:57 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:53.186 12:58:58 -- target/nvmf_lvol.sh@29 -- # lvs=2ef998f9-55e5-4230-ad4c-2937a087e3db 00:15:53.186 12:58:58 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ef998f9-55e5-4230-ad4c-2937a087e3db lvol 20 00:15:53.446 12:58:58 -- target/nvmf_lvol.sh@32 -- # lvol=6d5b96ad-34ed-4b49-b779-557eef273d82 00:15:53.446 12:58:58 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:53.446 12:58:58 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d5b96ad-34ed-4b49-b779-557eef273d82 00:15:53.706 12:58:58 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:53.706 [2024-04-26 12:58:58.732601] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.706 12:58:58 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:53.965 12:58:58 -- target/nvmf_lvol.sh@42 -- # perf_pid=3934039 00:15:53.965 12:58:58 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:53.965 12:58:58 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:53.965 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.902 
12:58:59 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6d5b96ad-34ed-4b49-b779-557eef273d82 MY_SNAPSHOT 00:15:55.162 12:59:00 -- target/nvmf_lvol.sh@47 -- # snapshot=4a93194c-ca1e-4099-9237-071cd97e154a 00:15:55.162 12:59:00 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6d5b96ad-34ed-4b49-b779-557eef273d82 30 00:15:55.422 12:59:00 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4a93194c-ca1e-4099-9237-071cd97e154a MY_CLONE 00:15:55.683 12:59:00 -- target/nvmf_lvol.sh@49 -- # clone=3c011c57-188b-42ae-88dd-facd1380cef6 00:15:55.683 12:59:00 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3c011c57-188b-42ae-88dd-facd1380cef6 00:15:55.943 12:59:00 -- target/nvmf_lvol.sh@53 -- # wait 3934039 00:16:05.947 Initializing NVMe Controllers 00:16:05.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:05.947 Controller IO queue size 128, less than required. 00:16:05.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:05.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:05.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:05.947 Initialization complete. Launching workers. 00:16:05.947 ======================================================== 00:16:05.947 Latency(us) 00:16:05.947 Device Information : IOPS MiB/s Average min max 00:16:05.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11904.96 46.50 10754.54 1573.59 60020.55 00:16:05.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16943.81 66.19 7554.65 1026.65 61399.05 00:16:05.947 ======================================================== 00:16:05.947 Total : 28848.77 112.69 8875.14 1026.65 61399.05 00:16:05.947 00:16:05.947 12:59:09 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:05.947 12:59:09 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d5b96ad-34ed-4b49-b779-557eef273d82 00:16:05.947 12:59:09 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ef998f9-55e5-4230-ad4c-2937a087e3db 00:16:05.947 12:59:09 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:05.947 12:59:09 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:05.947 12:59:09 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:05.947 12:59:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:05.947 12:59:09 -- nvmf/common.sh@117 -- # sync 00:16:05.947 12:59:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.947 12:59:09 -- nvmf/common.sh@120 -- # set +e 00:16:05.947 12:59:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.947 12:59:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.947 rmmod nvme_tcp 00:16:05.947 rmmod nvme_fabrics 00:16:05.947 rmmod nvme_keyring 00:16:05.947 12:59:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.947 12:59:09 -- nvmf/common.sh@124 -- # set -e 00:16:05.947 12:59:09 -- nvmf/common.sh@125 -- # return 0 00:16:05.947 12:59:09 -- nvmf/common.sh@478 -- # '[' -n 3933425 ']' 
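Stripped of the workspace paths, the volume-management sequence this test drives while spdk_nvme_perf keeps 128-deep random writes in flight is roughly the following. The UUID plumbing goes through command substitution just as the harness does, and Malloc0/Malloc1/lvs/MY_SNAPSHOT/MY_CLONE are the names from this run:

rpc=./scripts/rpc.py

# two 64 MiB malloc bdevs striped into a raid0, with an lvstore and a 20 MiB lvol on top
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# export the lvol over NVMe/TCP so the perf job can write to it
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# with I/O still running, snapshot, resize, clone and inflate the volume
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"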
00:16:05.947 12:59:09 -- nvmf/common.sh@479 -- # killprocess 3933425 00:16:05.947 12:59:09 -- common/autotest_common.sh@936 -- # '[' -z 3933425 ']' 00:16:05.947 12:59:09 -- common/autotest_common.sh@940 -- # kill -0 3933425 00:16:05.947 12:59:09 -- common/autotest_common.sh@941 -- # uname 00:16:05.947 12:59:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.947 12:59:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3933425 00:16:05.947 12:59:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:05.947 12:59:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:05.947 12:59:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3933425' 00:16:05.947 killing process with pid 3933425 00:16:05.947 12:59:09 -- common/autotest_common.sh@955 -- # kill 3933425 00:16:05.947 12:59:09 -- common/autotest_common.sh@960 -- # wait 3933425 00:16:05.947 12:59:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:05.947 12:59:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:05.947 12:59:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:05.947 12:59:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.947 12:59:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.947 12:59:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.947 12:59:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.947 12:59:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.335 12:59:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.335 00:16:07.335 real 0m23.284s 00:16:07.335 user 1m3.776s 00:16:07.335 sys 0m7.648s 00:16:07.335 12:59:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:07.335 12:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:07.335 ************************************ 00:16:07.335 END TEST nvmf_lvol 00:16:07.335 ************************************ 00:16:07.335 12:59:12 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:07.335 12:59:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:07.335 12:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.335 12:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:07.335 ************************************ 00:16:07.335 START TEST nvmf_lvs_grow 00:16:07.335 ************************************ 00:16:07.335 12:59:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:07.597 * Looking for test storage... 
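Teardown of that lvol target runs in the reverse order, as the trace above shows. A sketch, continuing the variables from the setup sketch earlier (PID handling is simplified relative to killprocess):

rpc=./scripts/rpc.py

# drop the subsystem first, then the lvol and its lvstore, then the target itself
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
kill "$nvmfpid" && wait "$nvmfpid"

# unload the host-side NVMe/TCP modules loaded for the test
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics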
00:16:07.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.597 12:59:12 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.597 12:59:12 -- nvmf/common.sh@7 -- # uname -s 00:16:07.597 12:59:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.597 12:59:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.597 12:59:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.597 12:59:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.597 12:59:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.597 12:59:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.597 12:59:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.597 12:59:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.597 12:59:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.597 12:59:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.597 12:59:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.597 12:59:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.597 12:59:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.597 12:59:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.597 12:59:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.597 12:59:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.597 12:59:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.597 12:59:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.597 12:59:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.597 12:59:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.597 12:59:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.597 12:59:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.598 12:59:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.598 12:59:12 -- paths/export.sh@5 -- # export PATH 00:16:07.598 12:59:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.598 12:59:12 -- nvmf/common.sh@47 -- # : 0 00:16:07.598 12:59:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.598 12:59:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.598 12:59:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.598 12:59:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.598 12:59:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.598 12:59:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.598 12:59:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.598 12:59:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.598 12:59:12 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.598 12:59:12 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:07.598 12:59:12 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:07.598 12:59:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:07.598 12:59:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.598 12:59:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:07.598 12:59:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:07.598 12:59:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:07.598 12:59:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.598 12:59:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.598 12:59:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.598 12:59:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:07.598 12:59:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:07.598 12:59:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.598 12:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:15.744 12:59:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:15.744 12:59:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.744 12:59:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.744 12:59:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.744 12:59:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.744 12:59:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.744 12:59:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.744 12:59:19 -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.744 12:59:19 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.744 12:59:19 -- nvmf/common.sh@296 -- # e810=() 00:16:15.744 12:59:19 -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.744 12:59:19 -- nvmf/common.sh@297 -- # x722=() 00:16:15.744 12:59:19 -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.744 12:59:19 -- nvmf/common.sh@298 -- # mlx=() 00:16:15.744 12:59:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:15.744 12:59:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.744 12:59:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.744 12:59:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:15.744 12:59:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:15.744 12:59:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:15.744 12:59:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:15.744 12:59:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.744 12:59:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.745 12:59:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:15.745 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:15.745 12:59:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.745 12:59:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:15.745 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:15.745 12:59:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.745 12:59:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.745 12:59:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.745 12:59:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.745 12:59:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.745 12:59:19 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:15.745 Found net devices under 0000:31:00.0: cvl_0_0 00:16:15.745 12:59:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.745 12:59:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.745 12:59:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.745 12:59:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.745 12:59:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.745 12:59:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:15.745 Found net devices under 0000:31:00.1: cvl_0_1 00:16:15.745 12:59:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.745 12:59:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:15.745 12:59:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:15.745 12:59:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:15.745 12:59:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.745 12:59:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.745 12:59:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.745 12:59:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:15.745 12:59:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.745 12:59:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.745 12:59:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:15.745 12:59:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.745 12:59:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.745 12:59:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:15.745 12:59:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:15.745 12:59:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.745 12:59:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.745 12:59:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.745 12:59:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.745 12:59:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.745 12:59:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.745 12:59:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.745 12:59:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.745 12:59:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:16:15.745 00:16:15.745 --- 10.0.0.2 ping statistics --- 00:16:15.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.745 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:16:15.745 12:59:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:16:15.745 00:16:15.745 --- 10.0.0.1 ping statistics --- 00:16:15.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.745 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:16:15.745 12:59:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.745 12:59:19 -- nvmf/common.sh@411 -- # return 0 00:16:15.745 12:59:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:15.745 12:59:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.745 12:59:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:15.745 12:59:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.745 12:59:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:15.745 12:59:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:15.745 12:59:19 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:15.745 12:59:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:15.745 12:59:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:15.745 12:59:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.745 12:59:19 -- nvmf/common.sh@470 -- # nvmfpid=3940495 00:16:15.745 12:59:19 -- nvmf/common.sh@471 -- # waitforlisten 3940495 00:16:15.745 12:59:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.745 12:59:19 -- common/autotest_common.sh@817 -- # '[' -z 3940495 ']' 00:16:15.745 12:59:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.745 12:59:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.745 12:59:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.745 12:59:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.745 12:59:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.745 [2024-04-26 12:59:19.694512] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:15.745 [2024-04-26 12:59:19.694561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.745 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.745 [2024-04-26 12:59:19.759495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.745 [2024-04-26 12:59:19.822540] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.745 [2024-04-26 12:59:19.822580] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.745 [2024-04-26 12:59:19.822588] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.745 [2024-04-26 12:59:19.822594] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.745 [2024-04-26 12:59:19.822600] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
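The nvmf_tcp_init trace above shows how the two E810 ports (cvl_0_0/cvl_0_1) are split for the test: one moves into a private network namespace as the target side, the other stays in the root namespace as the initiator. Condensed, with the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1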
00:16:15.745 [2024-04-26 12:59:19.822617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.745 12:59:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:15.745 12:59:20 -- common/autotest_common.sh@850 -- # return 0 00:16:15.745 12:59:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:15.745 12:59:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:15.745 12:59:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.745 12:59:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.745 12:59:20 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:15.745 [2024-04-26 12:59:20.649695] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.745 12:59:20 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:15.745 12:59:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:15.745 12:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.745 12:59:20 -- common/autotest_common.sh@10 -- # set +x 00:16:16.006 ************************************ 00:16:16.006 START TEST lvs_grow_clean 00:16:16.006 ************************************ 00:16:16.006 12:59:20 -- common/autotest_common.sh@1111 -- # lvs_grow 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.006 12:59:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.007 12:59:20 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:16.007 12:59:21 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:16.007 12:59:21 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:16.269 12:59:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:16.269 12:59:21 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:16.269 12:59:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:16.530 12:59:21 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:16.530 12:59:21 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:16.530 12:59:21 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db09f2d9-aab2-4630-9aa3-e5414b732de2 lvol 150 00:16:16.530 12:59:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b1282e6f-c974-4f9e-994b-4f21d8760c12 00:16:16.530 12:59:21 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.530 12:59:21 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:16.791 [2024-04-26 12:59:21.643845] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:16.791 [2024-04-26 12:59:21.643900] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:16.791 true 00:16:16.791 12:59:21 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:16.791 12:59:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:16.791 12:59:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:16.791 12:59:21 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:17.051 12:59:21 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b1282e6f-c974-4f9e-994b-4f21d8760c12 00:16:17.051 12:59:22 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:17.311 [2024-04-26 12:59:22.225620] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.311 12:59:22 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:17.571 12:59:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3940921 00:16:17.571 12:59:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:17.572 12:59:22 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:17.572 12:59:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3940921 /var/tmp/bdevperf.sock 00:16:17.572 12:59:22 -- common/autotest_common.sh@817 -- # '[' -z 3940921 ']' 00:16:17.572 12:59:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.572 12:59:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:17.572 12:59:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.572 12:59:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:17.572 12:59:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.572 [2024-04-26 12:59:22.422313] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:16:17.572 [2024-04-26 12:59:22.422361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940921 ] 00:16:17.572 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.572 [2024-04-26 12:59:22.499168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.572 [2024-04-26 12:59:22.551287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.142 12:59:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:18.142 12:59:23 -- common/autotest_common.sh@850 -- # return 0 00:16:18.142 12:59:23 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:18.403 Nvme0n1 00:16:18.403 12:59:23 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:18.663 [ 00:16:18.663 { 00:16:18.663 "name": "Nvme0n1", 00:16:18.663 "aliases": [ 00:16:18.663 "b1282e6f-c974-4f9e-994b-4f21d8760c12" 00:16:18.663 ], 00:16:18.663 "product_name": "NVMe disk", 00:16:18.663 "block_size": 4096, 00:16:18.663 "num_blocks": 38912, 00:16:18.663 "uuid": "b1282e6f-c974-4f9e-994b-4f21d8760c12", 00:16:18.663 "assigned_rate_limits": { 00:16:18.663 "rw_ios_per_sec": 0, 00:16:18.663 "rw_mbytes_per_sec": 0, 00:16:18.663 "r_mbytes_per_sec": 0, 00:16:18.663 "w_mbytes_per_sec": 0 00:16:18.663 }, 00:16:18.663 "claimed": false, 00:16:18.663 "zoned": false, 00:16:18.663 "supported_io_types": { 00:16:18.663 "read": true, 00:16:18.663 "write": true, 00:16:18.663 "unmap": true, 00:16:18.663 "write_zeroes": true, 00:16:18.663 "flush": true, 00:16:18.663 "reset": true, 00:16:18.663 "compare": true, 00:16:18.663 "compare_and_write": true, 00:16:18.663 "abort": true, 00:16:18.663 "nvme_admin": true, 00:16:18.663 "nvme_io": true 00:16:18.663 }, 00:16:18.663 "memory_domains": [ 00:16:18.663 { 00:16:18.663 "dma_device_id": "system", 00:16:18.663 "dma_device_type": 1 00:16:18.663 } 00:16:18.663 ], 00:16:18.663 "driver_specific": { 00:16:18.663 "nvme": [ 00:16:18.663 { 00:16:18.663 "trid": { 00:16:18.663 "trtype": "TCP", 00:16:18.663 "adrfam": "IPv4", 00:16:18.663 "traddr": "10.0.0.2", 00:16:18.663 "trsvcid": "4420", 00:16:18.663 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:18.663 }, 00:16:18.663 "ctrlr_data": { 00:16:18.663 "cntlid": 1, 00:16:18.663 "vendor_id": "0x8086", 00:16:18.663 "model_number": "SPDK bdev Controller", 00:16:18.663 "serial_number": "SPDK0", 00:16:18.663 "firmware_revision": "24.05", 00:16:18.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.663 "oacs": { 00:16:18.663 "security": 0, 00:16:18.663 "format": 0, 00:16:18.663 "firmware": 0, 00:16:18.663 "ns_manage": 0 00:16:18.663 }, 00:16:18.663 "multi_ctrlr": true, 00:16:18.663 "ana_reporting": false 00:16:18.663 }, 00:16:18.663 "vs": { 00:16:18.663 "nvme_version": "1.3" 00:16:18.663 }, 00:16:18.663 "ns_data": { 00:16:18.663 "id": 1, 00:16:18.663 "can_share": true 00:16:18.663 } 00:16:18.663 } 00:16:18.663 ], 00:16:18.663 "mp_policy": "active_passive" 00:16:18.663 } 00:16:18.663 } 00:16:18.663 ] 00:16:18.663 12:59:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3941254 00:16:18.663 12:59:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:18.663 12:59:23 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:18.663 Running I/O for 10 seconds... 00:16:19.603 Latency(us) 00:16:19.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.603 Nvme0n1 : 1.00 17470.00 68.24 0.00 0.00 0.00 0.00 0.00 00:16:19.603 =================================================================================================================== 00:16:19.603 Total : 17470.00 68.24 0.00 0.00 0.00 0.00 0.00 00:16:19.603 00:16:20.544 12:59:25 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:20.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.804 Nvme0n1 : 2.00 17627.50 68.86 0.00 0.00 0.00 0.00 0.00 00:16:20.804 =================================================================================================================== 00:16:20.804 Total : 17627.50 68.86 0.00 0.00 0.00 0.00 0.00 00:16:20.804 00:16:20.804 true 00:16:20.804 12:59:25 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:20.804 12:59:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:21.065 12:59:25 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:21.065 12:59:25 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:21.065 12:59:25 -- target/nvmf_lvs_grow.sh@65 -- # wait 3941254 00:16:21.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.635 Nvme0n1 : 3.00 17677.67 69.05 0.00 0.00 0.00 0.00 0.00 00:16:21.635 =================================================================================================================== 00:16:21.635 Total : 17677.67 69.05 0.00 0.00 0.00 0.00 0.00 00:16:21.635 00:16:23.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.020 Nvme0n1 : 4.00 17718.75 69.21 0.00 0.00 0.00 0.00 0.00 00:16:23.020 =================================================================================================================== 00:16:23.020 Total : 17718.75 69.21 0.00 0.00 0.00 0.00 0.00 00:16:23.020 00:16:23.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.964 Nvme0n1 : 5.00 17744.20 69.31 0.00 0.00 0.00 0.00 0.00 00:16:23.964 =================================================================================================================== 00:16:23.964 Total : 17744.20 69.31 0.00 0.00 0.00 0.00 0.00 00:16:23.964 00:16:24.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.905 Nvme0n1 : 6.00 17761.50 69.38 0.00 0.00 0.00 0.00 0.00 00:16:24.905 =================================================================================================================== 00:16:24.905 Total : 17761.50 69.38 0.00 0.00 0.00 0.00 0.00 00:16:24.905 00:16:25.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.845 Nvme0n1 : 7.00 17772.71 69.42 0.00 0.00 0.00 0.00 0.00 00:16:25.845 =================================================================================================================== 00:16:25.845 Total : 17772.71 69.42 0.00 0.00 0.00 0.00 0.00 00:16:25.845 00:16:26.788 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:16:26.788 Nvme0n1 : 8.00 17789.75 69.49 0.00 0.00 0.00 0.00 0.00 00:16:26.788 =================================================================================================================== 00:16:26.788 Total : 17789.75 69.49 0.00 0.00 0.00 0.00 0.00 00:16:26.788 00:16:27.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.729 Nvme0n1 : 9.00 17802.78 69.54 0.00 0.00 0.00 0.00 0.00 00:16:27.729 =================================================================================================================== 00:16:27.729 Total : 17802.78 69.54 0.00 0.00 0.00 0.00 0.00 00:16:27.729 00:16:28.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.671 Nvme0n1 : 10.00 17814.10 69.59 0.00 0.00 0.00 0.00 0.00 00:16:28.671 =================================================================================================================== 00:16:28.671 Total : 17814.10 69.59 0.00 0.00 0.00 0.00 0.00 00:16:28.671 00:16:28.671 00:16:28.671 Latency(us) 00:16:28.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.671 Nvme0n1 : 10.00 17813.05 69.58 0.00 0.00 7181.69 4369.07 16384.00 00:16:28.671 =================================================================================================================== 00:16:28.671 Total : 17813.05 69.58 0.00 0.00 7181.69 4369.07 16384.00 00:16:28.671 0 00:16:28.671 12:59:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3940921 00:16:28.671 12:59:33 -- common/autotest_common.sh@936 -- # '[' -z 3940921 ']' 00:16:28.671 12:59:33 -- common/autotest_common.sh@940 -- # kill -0 3940921 00:16:28.671 12:59:33 -- common/autotest_common.sh@941 -- # uname 00:16:28.671 12:59:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:28.671 12:59:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3940921 00:16:28.932 12:59:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:28.932 12:59:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:28.932 12:59:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3940921' 00:16:28.932 killing process with pid 3940921 00:16:28.932 12:59:33 -- common/autotest_common.sh@955 -- # kill 3940921 00:16:28.932 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.932 00:16:28.932 Latency(us) 00:16:28.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.932 =================================================================================================================== 00:16:28.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.932 12:59:33 -- common/autotest_common.sh@960 -- # wait 3940921 00:16:28.932 12:59:33 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:29.193 12:59:34 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:29.193 12:59:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:29.193 12:59:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:29.193 12:59:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:29.193 12:59:34 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:29.455 [2024-04-26 12:59:34.340295] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:29.455 12:59:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:29.455 12:59:34 -- common/autotest_common.sh@638 -- # local es=0 00:16:29.455 12:59:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:29.455 12:59:34 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.455 12:59:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:29.455 12:59:34 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.455 12:59:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:29.455 12:59:34 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.455 12:59:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:29.455 12:59:34 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.455 12:59:34 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:29.455 12:59:34 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:29.716 request: 00:16:29.716 { 00:16:29.716 "uuid": "db09f2d9-aab2-4630-9aa3-e5414b732de2", 00:16:29.716 "method": "bdev_lvol_get_lvstores", 00:16:29.716 "req_id": 1 00:16:29.716 } 00:16:29.716 Got JSON-RPC error response 00:16:29.716 response: 00:16:29.716 { 00:16:29.716 "code": -19, 00:16:29.716 "message": "No such device" 00:16:29.716 } 00:16:29.716 12:59:34 -- common/autotest_common.sh@641 -- # es=1 00:16:29.716 12:59:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:29.716 12:59:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:29.716 12:59:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:29.716 12:59:34 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:29.716 aio_bdev 00:16:29.716 12:59:34 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b1282e6f-c974-4f9e-994b-4f21d8760c12 00:16:29.716 12:59:34 -- common/autotest_common.sh@885 -- # local bdev_name=b1282e6f-c974-4f9e-994b-4f21d8760c12 00:16:29.716 12:59:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:29.716 12:59:34 -- common/autotest_common.sh@887 -- # local i 00:16:29.716 12:59:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:29.716 12:59:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:29.716 12:59:34 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:29.978 12:59:34 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b1282e6f-c974-4f9e-994b-4f21d8760c12 -t 2000 
00:16:29.978 [ 00:16:29.978 { 00:16:29.978 "name": "b1282e6f-c974-4f9e-994b-4f21d8760c12", 00:16:29.978 "aliases": [ 00:16:29.978 "lvs/lvol" 00:16:29.978 ], 00:16:29.978 "product_name": "Logical Volume", 00:16:29.978 "block_size": 4096, 00:16:29.978 "num_blocks": 38912, 00:16:29.978 "uuid": "b1282e6f-c974-4f9e-994b-4f21d8760c12", 00:16:29.978 "assigned_rate_limits": { 00:16:29.978 "rw_ios_per_sec": 0, 00:16:29.978 "rw_mbytes_per_sec": 0, 00:16:29.978 "r_mbytes_per_sec": 0, 00:16:29.978 "w_mbytes_per_sec": 0 00:16:29.978 }, 00:16:29.978 "claimed": false, 00:16:29.978 "zoned": false, 00:16:29.978 "supported_io_types": { 00:16:29.978 "read": true, 00:16:29.978 "write": true, 00:16:29.978 "unmap": true, 00:16:29.978 "write_zeroes": true, 00:16:29.978 "flush": false, 00:16:29.978 "reset": true, 00:16:29.978 "compare": false, 00:16:29.978 "compare_and_write": false, 00:16:29.978 "abort": false, 00:16:29.978 "nvme_admin": false, 00:16:29.978 "nvme_io": false 00:16:29.978 }, 00:16:29.978 "driver_specific": { 00:16:29.978 "lvol": { 00:16:29.978 "lvol_store_uuid": "db09f2d9-aab2-4630-9aa3-e5414b732de2", 00:16:29.978 "base_bdev": "aio_bdev", 00:16:29.978 "thin_provision": false, 00:16:29.978 "snapshot": false, 00:16:29.978 "clone": false, 00:16:29.978 "esnap_clone": false 00:16:29.978 } 00:16:29.978 } 00:16:29.978 } 00:16:29.978 ] 00:16:29.978 12:59:34 -- common/autotest_common.sh@893 -- # return 0 00:16:29.978 12:59:34 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:29.978 12:59:34 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:30.239 12:59:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:30.239 12:59:35 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:30.239 12:59:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:30.499 12:59:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:30.499 12:59:35 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b1282e6f-c974-4f9e-994b-4f21d8760c12 00:16:30.499 12:59:35 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db09f2d9-aab2-4630-9aa3-e5414b732de2 00:16:30.760 12:59:35 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:30.760 12:59:35 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:30.760 00:16:30.760 real 0m14.993s 00:16:30.760 user 0m14.748s 00:16:30.760 sys 0m1.249s 00:16:31.059 12:59:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.059 12:59:35 -- common/autotest_common.sh@10 -- # set +x 00:16:31.059 ************************************ 00:16:31.059 END TEST lvs_grow_clean 00:16:31.059 ************************************ 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:31.059 12:59:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:31.059 12:59:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.059 12:59:35 -- common/autotest_common.sh@10 -- # set +x 00:16:31.059 ************************************ 00:16:31.059 START TEST lvs_grow_dirty 
00:16:31.059 ************************************ 00:16:31.059 12:59:35 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:31.059 12:59:35 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:31.059 12:59:36 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:31.059 12:59:36 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:31.364 12:59:36 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:31.364 12:59:36 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:31.364 12:59:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=47d2bb3c-9898-4402-bd91-c437882a4503 00:16:31.364 12:59:36 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:31.364 12:59:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:31.625 12:59:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:31.625 12:59:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:31.625 12:59:36 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47d2bb3c-9898-4402-bd91-c437882a4503 lvol 150 00:16:31.625 12:59:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7c0cb62-9a81-4190-8755-dab646636541 00:16:31.625 12:59:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:31.625 12:59:36 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:31.886 [2024-04-26 12:59:36.789855] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:31.886 [2024-04-26 12:59:36.789908] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:31.886 true 00:16:31.886 12:59:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:31.886 12:59:36 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:32.146 12:59:36 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:32.146 12:59:36 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:32.146 12:59:37 -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7c0cb62-9a81-4190-8755-dab646636541 00:16:32.407 12:59:37 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:32.407 12:59:37 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:32.668 12:59:37 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3944006 00:16:32.668 12:59:37 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.668 12:59:37 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:32.668 12:59:37 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3944006 /var/tmp/bdevperf.sock 00:16:32.668 12:59:37 -- common/autotest_common.sh@817 -- # '[' -z 3944006 ']' 00:16:32.668 12:59:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.668 12:59:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.668 12:59:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.669 12:59:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.669 12:59:37 -- common/autotest_common.sh@10 -- # set +x 00:16:32.669 [2024-04-26 12:59:37.601562] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:16:32.669 [2024-04-26 12:59:37.601612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3944006 ] 00:16:32.669 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.669 [2024-04-26 12:59:37.675569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.669 [2024-04-26 12:59:37.727574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.612 12:59:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.612 12:59:38 -- common/autotest_common.sh@850 -- # return 0 00:16:33.612 12:59:38 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:33.612 Nvme0n1 00:16:33.612 12:59:38 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:33.874 [ 00:16:33.874 { 00:16:33.874 "name": "Nvme0n1", 00:16:33.874 "aliases": [ 00:16:33.874 "e7c0cb62-9a81-4190-8755-dab646636541" 00:16:33.874 ], 00:16:33.874 "product_name": "NVMe disk", 00:16:33.874 "block_size": 4096, 00:16:33.874 "num_blocks": 38912, 00:16:33.874 "uuid": "e7c0cb62-9a81-4190-8755-dab646636541", 00:16:33.874 "assigned_rate_limits": { 00:16:33.874 "rw_ios_per_sec": 0, 00:16:33.874 "rw_mbytes_per_sec": 0, 00:16:33.874 "r_mbytes_per_sec": 0, 00:16:33.874 "w_mbytes_per_sec": 0 00:16:33.874 }, 00:16:33.874 "claimed": false, 00:16:33.874 "zoned": false, 00:16:33.874 "supported_io_types": { 00:16:33.874 "read": true, 00:16:33.874 "write": true, 00:16:33.874 "unmap": true, 00:16:33.874 "write_zeroes": true, 00:16:33.874 "flush": true, 00:16:33.874 "reset": true, 00:16:33.874 "compare": true, 00:16:33.874 "compare_and_write": true, 00:16:33.874 "abort": true, 00:16:33.874 "nvme_admin": true, 00:16:33.874 "nvme_io": true 00:16:33.874 }, 00:16:33.874 "memory_domains": [ 00:16:33.874 { 00:16:33.874 "dma_device_id": "system", 00:16:33.874 "dma_device_type": 1 00:16:33.874 } 00:16:33.874 ], 00:16:33.874 "driver_specific": { 00:16:33.874 "nvme": [ 00:16:33.874 { 00:16:33.874 "trid": { 00:16:33.874 "trtype": "TCP", 00:16:33.874 "adrfam": "IPv4", 00:16:33.874 "traddr": "10.0.0.2", 00:16:33.874 "trsvcid": "4420", 00:16:33.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:33.874 }, 00:16:33.874 "ctrlr_data": { 00:16:33.874 "cntlid": 1, 00:16:33.874 "vendor_id": "0x8086", 00:16:33.874 "model_number": "SPDK bdev Controller", 00:16:33.874 "serial_number": "SPDK0", 00:16:33.874 "firmware_revision": "24.05", 00:16:33.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.874 "oacs": { 00:16:33.874 "security": 0, 00:16:33.874 "format": 0, 00:16:33.874 "firmware": 0, 00:16:33.874 "ns_manage": 0 00:16:33.874 }, 00:16:33.874 "multi_ctrlr": true, 00:16:33.874 "ana_reporting": false 00:16:33.874 }, 00:16:33.874 "vs": { 00:16:33.874 "nvme_version": "1.3" 00:16:33.874 }, 00:16:33.874 "ns_data": { 00:16:33.874 "id": 1, 00:16:33.874 "can_share": true 00:16:33.874 } 00:16:33.874 } 00:16:33.874 ], 00:16:33.874 "mp_policy": "active_passive" 00:16:33.874 } 00:16:33.874 } 00:16:33.874 ] 00:16:33.874 12:59:38 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3944340 00:16:33.874 12:59:38 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:33.874 12:59:38 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:33.874 Running I/O for 10 seconds... 00:16:35.267 Latency(us) 00:16:35.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.267 Nvme0n1 : 1.00 17583.00 68.68 0.00 0.00 0.00 0.00 0.00 00:16:35.268 =================================================================================================================== 00:16:35.268 Total : 17583.00 68.68 0.00 0.00 0.00 0.00 0.00 00:16:35.268 00:16:35.839 12:59:40 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:35.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.839 Nvme0n1 : 2.00 17676.00 69.05 0.00 0.00 0.00 0.00 0.00 00:16:35.839 =================================================================================================================== 00:16:35.839 Total : 17676.00 69.05 0.00 0.00 0.00 0.00 0.00 00:16:35.839 00:16:36.100 true 00:16:36.100 12:59:40 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:36.100 12:59:40 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:36.100 12:59:41 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:36.100 12:59:41 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:36.100 12:59:41 -- target/nvmf_lvs_grow.sh@65 -- # wait 3944340 00:16:37.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.040 Nvme0n1 : 3.00 17711.00 69.18 0.00 0.00 0.00 0.00 0.00 00:16:37.040 =================================================================================================================== 00:16:37.040 Total : 17711.00 69.18 0.00 0.00 0.00 0.00 0.00 00:16:37.040 00:16:37.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.981 Nvme0n1 : 4.00 17772.00 69.42 0.00 0.00 0.00 0.00 0.00 00:16:37.981 =================================================================================================================== 00:16:37.981 Total : 17772.00 69.42 0.00 0.00 0.00 0.00 0.00 00:16:37.981 00:16:38.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.921 Nvme0n1 : 5.00 17798.40 69.53 0.00 0.00 0.00 0.00 0.00 00:16:38.921 =================================================================================================================== 00:16:38.921 Total : 17798.40 69.53 0.00 0.00 0.00 0.00 0.00 00:16:38.921 00:16:39.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.860 Nvme0n1 : 6.00 17814.83 69.59 0.00 0.00 0.00 0.00 0.00 00:16:39.860 =================================================================================================================== 00:16:39.860 Total : 17814.83 69.59 0.00 0.00 0.00 0.00 0.00 00:16:39.860 00:16:41.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.241 Nvme0n1 : 7.00 17836.57 69.67 0.00 0.00 0.00 0.00 0.00 00:16:41.241 =================================================================================================================== 00:16:41.241 Total : 17836.57 69.67 0.00 0.00 0.00 0.00 0.00 00:16:41.241 00:16:42.181 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:16:42.181 Nvme0n1 : 8.00 17843.88 69.70 0.00 0.00 0.00 0.00 0.00 00:16:42.181 =================================================================================================================== 00:16:42.181 Total : 17843.88 69.70 0.00 0.00 0.00 0.00 0.00 00:16:42.181 00:16:43.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.122 Nvme0n1 : 9.00 17857.11 69.75 0.00 0.00 0.00 0.00 0.00 00:16:43.122 =================================================================================================================== 00:16:43.122 Total : 17857.11 69.75 0.00 0.00 0.00 0.00 0.00 00:16:43.122 00:16:44.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.062 Nvme0n1 : 10.00 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:16:44.062 =================================================================================================================== 00:16:44.062 Total : 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:16:44.062 00:16:44.062 00:16:44.062 Latency(us) 00:16:44.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.062 Nvme0n1 : 10.01 17869.54 69.80 0.00 0.00 7159.39 4314.45 13817.17 00:16:44.062 =================================================================================================================== 00:16:44.062 Total : 17869.54 69.80 0.00 0.00 7159.39 4314.45 13817.17 00:16:44.062 0 00:16:44.062 12:59:48 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3944006 00:16:44.062 12:59:48 -- common/autotest_common.sh@936 -- # '[' -z 3944006 ']' 00:16:44.062 12:59:48 -- common/autotest_common.sh@940 -- # kill -0 3944006 00:16:44.062 12:59:48 -- common/autotest_common.sh@941 -- # uname 00:16:44.062 12:59:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.062 12:59:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3944006 00:16:44.062 12:59:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:44.062 12:59:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:44.062 12:59:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3944006' 00:16:44.062 killing process with pid 3944006 00:16:44.062 12:59:48 -- common/autotest_common.sh@955 -- # kill 3944006 00:16:44.062 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.062 00:16:44.062 Latency(us) 00:16:44.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.062 =================================================================================================================== 00:16:44.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.062 12:59:48 -- common/autotest_common.sh@960 -- # wait 3944006 00:16:44.062 12:59:49 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:44.322 12:59:49 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:44.322 12:59:49 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:44.583 12:59:49 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:44.583 12:59:49 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:44.583 12:59:49 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3940495 00:16:44.583 
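
The run above is the "dirty" half of the lvs_grow_dirty test: bdevperf attaches to the exported lvol bdev over NVMe/TCP, drives ten seconds of randwrite I/O while the lvstore is grown to 99 data clusters, and the nvmf target is then killed with SIGKILL so the store is deliberately left unclean for the recovery check that follows. A minimal sketch of that sequence, condensed from the commands visible above and not the literal test script (paths shortened; $nvmfpid is assumed to hold the target PID, 3940495 in this run):

    # Sketch only: same RPC sequence as the log, shortened paths.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &        # 10 s of randwrite I/O
    rpc.py bdev_lvol_grow_lvstore -u 47d2bb3c-9898-4402-bd91-c437882a4503
    rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 \
        | jq -r '.[0].total_data_clusters'                       # expect 99 after the grow
    kill -9 "$nvmfpid"                                           # leave the lvstore dirty on purpose
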
12:59:49 -- target/nvmf_lvs_grow.sh@74 -- # wait 3940495 00:16:44.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3940495 Killed "${NVMF_APP[@]}" "$@" 00:16:44.583 12:59:49 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:44.583 12:59:49 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:44.583 12:59:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:44.583 12:59:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:44.583 12:59:49 -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 12:59:49 -- nvmf/common.sh@470 -- # nvmfpid=3946363 00:16:44.583 12:59:49 -- nvmf/common.sh@471 -- # waitforlisten 3946363 00:16:44.583 12:59:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:44.583 12:59:49 -- common/autotest_common.sh@817 -- # '[' -z 3946363 ']' 00:16:44.583 12:59:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.583 12:59:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.583 12:59:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.583 12:59:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.583 12:59:49 -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 [2024-04-26 12:59:49.510121] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:44.583 [2024-04-26 12:59:49.510177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.583 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.583 [2024-04-26 12:59:49.576082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.583 [2024-04-26 12:59:49.639175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.583 [2024-04-26 12:59:49.639216] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.583 [2024-04-26 12:59:49.639224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.583 [2024-04-26 12:59:49.639230] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.583 [2024-04-26 12:59:49.639235] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
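
At this point the original target (pid 3940495) is gone and nvmfappstart brings up a fresh single-core nvmf_tgt inside the test namespace so the dirty lvstore can be replayed. One way to approximate the nvmfappstart/waitforlisten pair shown above; this is a sketch, not the in-tree helpers from nvmf/common.sh and autotest_common.sh, which poll the UNIX socket directly rather than issuing an RPC:

    # Assumed socket path and namespace taken from the log above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                        # wait until the target listens on its RPC socket
    done
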
00:16:44.583 [2024-04-26 12:59:49.639253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.524 12:59:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.524 12:59:50 -- common/autotest_common.sh@850 -- # return 0 00:16:45.524 12:59:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:45.524 12:59:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:45.524 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.524 12:59:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.524 12:59:50 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:45.524 [2024-04-26 12:59:50.456079] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:45.524 [2024-04-26 12:59:50.456168] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:45.524 [2024-04-26 12:59:50.456198] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:45.524 12:59:50 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:45.524 12:59:50 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev e7c0cb62-9a81-4190-8755-dab646636541 00:16:45.524 12:59:50 -- common/autotest_common.sh@885 -- # local bdev_name=e7c0cb62-9a81-4190-8755-dab646636541 00:16:45.524 12:59:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:45.524 12:59:50 -- common/autotest_common.sh@887 -- # local i 00:16:45.524 12:59:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:45.524 12:59:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:45.524 12:59:50 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.785 12:59:50 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7c0cb62-9a81-4190-8755-dab646636541 -t 2000 00:16:45.785 [ 00:16:45.785 { 00:16:45.785 "name": "e7c0cb62-9a81-4190-8755-dab646636541", 00:16:45.785 "aliases": [ 00:16:45.785 "lvs/lvol" 00:16:45.785 ], 00:16:45.785 "product_name": "Logical Volume", 00:16:45.785 "block_size": 4096, 00:16:45.785 "num_blocks": 38912, 00:16:45.785 "uuid": "e7c0cb62-9a81-4190-8755-dab646636541", 00:16:45.785 "assigned_rate_limits": { 00:16:45.785 "rw_ios_per_sec": 0, 00:16:45.785 "rw_mbytes_per_sec": 0, 00:16:45.785 "r_mbytes_per_sec": 0, 00:16:45.785 "w_mbytes_per_sec": 0 00:16:45.785 }, 00:16:45.785 "claimed": false, 00:16:45.785 "zoned": false, 00:16:45.785 "supported_io_types": { 00:16:45.785 "read": true, 00:16:45.785 "write": true, 00:16:45.785 "unmap": true, 00:16:45.785 "write_zeroes": true, 00:16:45.785 "flush": false, 00:16:45.785 "reset": true, 00:16:45.785 "compare": false, 00:16:45.785 "compare_and_write": false, 00:16:45.785 "abort": false, 00:16:45.785 "nvme_admin": false, 00:16:45.785 "nvme_io": false 00:16:45.785 }, 00:16:45.785 "driver_specific": { 00:16:45.785 "lvol": { 00:16:45.785 "lvol_store_uuid": "47d2bb3c-9898-4402-bd91-c437882a4503", 00:16:45.785 "base_bdev": "aio_bdev", 00:16:45.785 "thin_provision": false, 00:16:45.785 "snapshot": false, 00:16:45.785 "clone": false, 00:16:45.785 "esnap_clone": false 00:16:45.785 } 00:16:45.785 } 00:16:45.785 } 00:16:45.785 ] 00:16:45.785 12:59:50 -- common/autotest_common.sh@893 -- # return 0 00:16:45.785 12:59:50 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:45.785 12:59:50 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:46.046 12:59:50 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:46.046 12:59:50 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:46.046 12:59:50 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:46.306 12:59:51 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:46.306 12:59:51 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:46.306 [2024-04-26 12:59:51.256095] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:46.307 12:59:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:46.307 12:59:51 -- common/autotest_common.sh@638 -- # local es=0 00:16:46.307 12:59:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:46.307 12:59:51 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.307 12:59:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.307 12:59:51 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.307 12:59:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.307 12:59:51 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.307 12:59:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.307 12:59:51 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.307 12:59:51 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:46.307 12:59:51 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:46.568 request: 00:16:46.568 { 00:16:46.568 "uuid": "47d2bb3c-9898-4402-bd91-c437882a4503", 00:16:46.568 "method": "bdev_lvol_get_lvstores", 00:16:46.568 "req_id": 1 00:16:46.568 } 00:16:46.568 Got JSON-RPC error response 00:16:46.568 response: 00:16:46.568 { 00:16:46.568 "code": -19, 00:16:46.568 "message": "No such device" 00:16:46.568 } 00:16:46.568 12:59:51 -- common/autotest_common.sh@641 -- # es=1 00:16:46.568 12:59:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:46.568 12:59:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:46.568 12:59:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:46.568 12:59:51 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:46.568 aio_bdev 00:16:46.568 12:59:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e7c0cb62-9a81-4190-8755-dab646636541 00:16:46.568 12:59:51 -- 
common/autotest_common.sh@885 -- # local bdev_name=e7c0cb62-9a81-4190-8755-dab646636541 00:16:46.568 12:59:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:46.568 12:59:51 -- common/autotest_common.sh@887 -- # local i 00:16:46.568 12:59:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:46.568 12:59:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:46.568 12:59:51 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:46.828 12:59:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7c0cb62-9a81-4190-8755-dab646636541 -t 2000 00:16:47.089 [ 00:16:47.089 { 00:16:47.089 "name": "e7c0cb62-9a81-4190-8755-dab646636541", 00:16:47.089 "aliases": [ 00:16:47.089 "lvs/lvol" 00:16:47.089 ], 00:16:47.089 "product_name": "Logical Volume", 00:16:47.089 "block_size": 4096, 00:16:47.089 "num_blocks": 38912, 00:16:47.089 "uuid": "e7c0cb62-9a81-4190-8755-dab646636541", 00:16:47.089 "assigned_rate_limits": { 00:16:47.089 "rw_ios_per_sec": 0, 00:16:47.089 "rw_mbytes_per_sec": 0, 00:16:47.089 "r_mbytes_per_sec": 0, 00:16:47.089 "w_mbytes_per_sec": 0 00:16:47.089 }, 00:16:47.089 "claimed": false, 00:16:47.089 "zoned": false, 00:16:47.089 "supported_io_types": { 00:16:47.089 "read": true, 00:16:47.089 "write": true, 00:16:47.089 "unmap": true, 00:16:47.089 "write_zeroes": true, 00:16:47.089 "flush": false, 00:16:47.089 "reset": true, 00:16:47.089 "compare": false, 00:16:47.089 "compare_and_write": false, 00:16:47.089 "abort": false, 00:16:47.089 "nvme_admin": false, 00:16:47.089 "nvme_io": false 00:16:47.089 }, 00:16:47.089 "driver_specific": { 00:16:47.089 "lvol": { 00:16:47.089 "lvol_store_uuid": "47d2bb3c-9898-4402-bd91-c437882a4503", 00:16:47.089 "base_bdev": "aio_bdev", 00:16:47.089 "thin_provision": false, 00:16:47.089 "snapshot": false, 00:16:47.089 "clone": false, 00:16:47.089 "esnap_clone": false 00:16:47.089 } 00:16:47.089 } 00:16:47.089 } 00:16:47.089 ] 00:16:47.089 12:59:51 -- common/autotest_common.sh@893 -- # return 0 00:16:47.089 12:59:51 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:47.089 12:59:51 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:47.089 12:59:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:47.089 12:59:52 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:47.089 12:59:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:47.350 12:59:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:47.350 12:59:52 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7c0cb62-9a81-4190-8755-dab646636541 00:16:47.350 12:59:52 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47d2bb3c-9898-4402-bd91-c437882a4503 00:16:47.609 12:59:52 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:47.869 12:59:52 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:47.869 00:16:47.869 real 0m16.766s 00:16:47.869 user 
0m44.031s 00:16:47.869 sys 0m2.771s 00:16:47.869 12:59:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:47.869 12:59:52 -- common/autotest_common.sh@10 -- # set +x 00:16:47.869 ************************************ 00:16:47.869 END TEST lvs_grow_dirty 00:16:47.869 ************************************ 00:16:47.869 12:59:52 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:47.869 12:59:52 -- common/autotest_common.sh@794 -- # type=--id 00:16:47.869 12:59:52 -- common/autotest_common.sh@795 -- # id=0 00:16:47.869 12:59:52 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:47.869 12:59:52 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:47.869 12:59:52 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:47.869 12:59:52 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:47.869 12:59:52 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:47.869 12:59:52 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:47.869 nvmf_trace.0 00:16:47.869 12:59:52 -- common/autotest_common.sh@809 -- # return 0 00:16:47.869 12:59:52 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:47.869 12:59:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:47.869 12:59:52 -- nvmf/common.sh@117 -- # sync 00:16:47.869 12:59:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.869 12:59:52 -- nvmf/common.sh@120 -- # set +e 00:16:47.869 12:59:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.869 12:59:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.869 rmmod nvme_tcp 00:16:47.869 rmmod nvme_fabrics 00:16:47.869 rmmod nvme_keyring 00:16:47.869 12:59:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.869 12:59:52 -- nvmf/common.sh@124 -- # set -e 00:16:47.869 12:59:52 -- nvmf/common.sh@125 -- # return 0 00:16:47.869 12:59:52 -- nvmf/common.sh@478 -- # '[' -n 3946363 ']' 00:16:47.869 12:59:52 -- nvmf/common.sh@479 -- # killprocess 3946363 00:16:47.869 12:59:52 -- common/autotest_common.sh@936 -- # '[' -z 3946363 ']' 00:16:47.869 12:59:52 -- common/autotest_common.sh@940 -- # kill -0 3946363 00:16:47.869 12:59:52 -- common/autotest_common.sh@941 -- # uname 00:16:47.869 12:59:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.869 12:59:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3946363 00:16:48.129 12:59:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.129 12:59:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.129 12:59:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3946363' 00:16:48.129 killing process with pid 3946363 00:16:48.129 12:59:52 -- common/autotest_common.sh@955 -- # kill 3946363 00:16:48.129 12:59:52 -- common/autotest_common.sh@960 -- # wait 3946363 00:16:48.129 12:59:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:48.129 12:59:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:48.129 12:59:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:48.129 12:59:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.129 12:59:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.129 12:59:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.129 12:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.129 12:59:53 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:50.676 12:59:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.676 00:16:50.676 real 0m42.855s 00:16:50.676 user 1m4.861s 00:16:50.676 sys 0m9.825s 00:16:50.676 12:59:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.676 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.676 ************************************ 00:16:50.676 END TEST nvmf_lvs_grow 00:16:50.676 ************************************ 00:16:50.676 12:59:55 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:50.676 12:59:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.676 12:59:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.676 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:16:50.676 ************************************ 00:16:50.676 START TEST nvmf_bdev_io_wait 00:16:50.676 ************************************ 00:16:50.676 12:59:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:50.676 * Looking for test storage... 00:16:50.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.676 12:59:55 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.676 12:59:55 -- nvmf/common.sh@7 -- # uname -s 00:16:50.676 12:59:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.676 12:59:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.676 12:59:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.676 12:59:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.676 12:59:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.676 12:59:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.676 12:59:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.676 12:59:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.676 12:59:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.676 12:59:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.676 12:59:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.676 12:59:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.676 12:59:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.676 12:59:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.676 12:59:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.676 12:59:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.676 12:59:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.676 12:59:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.676 12:59:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.676 12:59:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.676 12:59:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.676 12:59:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.676 12:59:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.676 12:59:55 -- paths/export.sh@5 -- # export PATH 00:16:50.676 12:59:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.676 12:59:55 -- nvmf/common.sh@47 -- # : 0 00:16:50.676 12:59:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.676 12:59:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.676 12:59:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.676 12:59:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.676 12:59:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.676 12:59:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.676 12:59:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.676 12:59:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.676 12:59:55 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.676 12:59:55 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.676 12:59:55 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:50.676 12:59:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:50.676 12:59:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.676 12:59:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:50.676 12:59:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:50.676 12:59:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:50.676 12:59:55 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.676 12:59:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.676 12:59:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.676 12:59:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:50.676 12:59:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:50.676 12:59:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.676 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:16:57.261 13:00:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:57.261 13:00:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.261 13:00:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.261 13:00:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.261 13:00:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.261 13:00:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.261 13:00:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.261 13:00:02 -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.261 13:00:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.261 13:00:02 -- nvmf/common.sh@296 -- # e810=() 00:16:57.261 13:00:02 -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.261 13:00:02 -- nvmf/common.sh@297 -- # x722=() 00:16:57.261 13:00:02 -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.261 13:00:02 -- nvmf/common.sh@298 -- # mlx=() 00:16:57.261 13:00:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.261 13:00:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.261 13:00:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.261 13:00:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.261 13:00:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.261 13:00:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.261 13:00:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.261 13:00:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.261 13:00:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.262 13:00:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.262 13:00:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.262 13:00:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.262 13:00:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.262 13:00:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.262 13:00:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.262 13:00:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.262 13:00:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:57.262 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:57.262 13:00:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:16:57.262 13:00:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:57.262 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:57.262 13:00:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.262 13:00:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.262 13:00:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.262 13:00:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:57.262 13:00:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.262 13:00:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:57.262 Found net devices under 0000:31:00.0: cvl_0_0 00:16:57.262 13:00:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.262 13:00:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.262 13:00:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.262 13:00:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:57.262 13:00:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.262 13:00:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:57.262 Found net devices under 0000:31:00.1: cvl_0_1 00:16:57.262 13:00:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.262 13:00:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:57.262 13:00:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:57.262 13:00:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:57.262 13:00:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:57.262 13:00:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.262 13:00:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.262 13:00:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.262 13:00:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.262 13:00:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.262 13:00:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.262 13:00:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.262 13:00:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.262 13:00:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.262 13:00:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.262 13:00:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.262 13:00:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.262 13:00:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.521 13:00:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.521 13:00:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.521 13:00:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:57.521 13:00:02 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.521 13:00:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.521 13:00:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.521 13:00:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:16:57.521 00:16:57.521 --- 10.0.0.2 ping statistics --- 00:16:57.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.521 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:16:57.521 13:00:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:16:57.521 00:16:57.521 --- 10.0.0.1 ping statistics --- 00:16:57.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.521 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:16:57.521 13:00:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.521 13:00:02 -- nvmf/common.sh@411 -- # return 0 00:16:57.521 13:00:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:57.521 13:00:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.521 13:00:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:57.521 13:00:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:57.521 13:00:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.521 13:00:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:57.521 13:00:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:57.521 13:00:02 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:57.521 13:00:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:57.521 13:00:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:57.521 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:16:57.521 13:00:02 -- nvmf/common.sh@470 -- # nvmfpid=3951294 00:16:57.521 13:00:02 -- nvmf/common.sh@471 -- # waitforlisten 3951294 00:16:57.521 13:00:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:57.521 13:00:02 -- common/autotest_common.sh@817 -- # '[' -z 3951294 ']' 00:16:57.521 13:00:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.521 13:00:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:57.521 13:00:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.521 13:00:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:57.521 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:16:57.779 [2024-04-26 13:00:02.594724] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
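
The nvmf_tcp_init steps above split the two detected E810 ports between namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with both directions verified by ping before nvme-tcp is loaded. Condensed into plain commands, using the interface and namespace names detected on this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # target reachable from the initiator
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back again
    modprobe nvme-tcp
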
00:16:57.779 [2024-04-26 13:00:02.594777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.779 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.779 [2024-04-26 13:00:02.664765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.779 [2024-04-26 13:00:02.736079] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.779 [2024-04-26 13:00:02.736116] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.779 [2024-04-26 13:00:02.736125] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.779 [2024-04-26 13:00:02.736133] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.779 [2024-04-26 13:00:02.736140] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.779 [2024-04-26 13:00:02.736206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.779 [2024-04-26 13:00:02.736309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.779 [2024-04-26 13:00:02.736450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.779 [2024-04-26 13:00:02.736451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.349 13:00:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:58.349 13:00:03 -- common/autotest_common.sh@850 -- # return 0 00:16:58.349 13:00:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:58.349 13:00:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:58.349 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 13:00:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:58.610 13:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.610 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 13:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:58.610 13:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.610 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 13:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.610 13:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.610 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 [2024-04-26 13:00:03.476587] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.610 13:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:58.610 13:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.610 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 Malloc0 00:16:58.610 13:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:58.610 13:00:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.610 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 13:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:58.610 13:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.610 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 13:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.610 13:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.610 13:00:03 -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 [2024-04-26 13:00:03.539115] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.610 13:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3951622 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:58.610 13:00:03 -- nvmf/common.sh@521 -- # config=() 00:16:58.610 13:00:03 -- nvmf/common.sh@521 -- # local subsystem config 00:16:58.610 13:00:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.610 13:00:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.610 { 00:16:58.610 "params": { 00:16:58.610 "name": "Nvme$subsystem", 00:16:58.610 "trtype": "$TEST_TRANSPORT", 00:16:58.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.610 "adrfam": "ipv4", 00:16:58.610 "trsvcid": "$NVMF_PORT", 00:16:58.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.610 "hdgst": ${hdgst:-false}, 00:16:58.610 "ddgst": ${ddgst:-false} 00:16:58.610 }, 00:16:58.610 "method": "bdev_nvme_attach_controller" 00:16:58.610 } 00:16:58.610 EOF 00:16:58.610 )") 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@30 -- # READ_PID=3951624 00:16:58.610 13:00:03 -- nvmf/common.sh@543 -- # cat 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3951627 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:58.610 13:00:03 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:58.610 13:00:03 -- nvmf/common.sh@521 -- # config=() 00:16:58.610 13:00:03 -- nvmf/common.sh@521 -- # local subsystem config 00:16:58.610 13:00:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.610 13:00:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.610 { 00:16:58.610 "params": { 00:16:58.610 "name": "Nvme$subsystem", 00:16:58.610 "trtype": "$TEST_TRANSPORT", 00:16:58.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.610 "adrfam": "ipv4", 00:16:58.610 "trsvcid": "$NVMF_PORT", 00:16:58.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.610 "hdgst": ${hdgst:-false}, 00:16:58.610 "ddgst": ${ddgst:-false} 00:16:58.610 }, 00:16:58.610 "method": "bdev_nvme_attach_controller" 00:16:58.610 } 00:16:58.610 EOF 00:16:58.610 )") 00:16:58.611 13:00:03 -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=3951629 00:16:58.611 13:00:03 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:58.611 13:00:03 -- target/bdev_io_wait.sh@35 -- # sync 00:16:58.611 13:00:03 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:58.611 13:00:03 -- nvmf/common.sh@521 -- # config=() 00:16:58.611 13:00:03 -- nvmf/common.sh@521 -- # local subsystem config 00:16:58.611 13:00:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.611 13:00:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.611 { 00:16:58.611 "params": { 00:16:58.611 "name": "Nvme$subsystem", 00:16:58.611 "trtype": "$TEST_TRANSPORT", 00:16:58.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.611 "adrfam": "ipv4", 00:16:58.611 "trsvcid": "$NVMF_PORT", 00:16:58.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.611 "hdgst": ${hdgst:-false}, 00:16:58.611 "ddgst": ${ddgst:-false} 00:16:58.611 }, 00:16:58.611 "method": "bdev_nvme_attach_controller" 00:16:58.611 } 00:16:58.611 EOF 00:16:58.611 )") 00:16:58.611 13:00:03 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:58.611 13:00:03 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:58.611 13:00:03 -- nvmf/common.sh@521 -- # config=() 00:16:58.611 13:00:03 -- nvmf/common.sh@543 -- # cat 00:16:58.611 13:00:03 -- nvmf/common.sh@521 -- # local subsystem config 00:16:58.611 13:00:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.611 13:00:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.611 { 00:16:58.611 "params": { 00:16:58.611 "name": "Nvme$subsystem", 00:16:58.611 "trtype": "$TEST_TRANSPORT", 00:16:58.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.611 "adrfam": "ipv4", 00:16:58.611 "trsvcid": "$NVMF_PORT", 00:16:58.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.611 "hdgst": ${hdgst:-false}, 00:16:58.611 "ddgst": ${ddgst:-false} 00:16:58.611 }, 00:16:58.611 "method": "bdev_nvme_attach_controller" 00:16:58.611 } 00:16:58.611 EOF 00:16:58.611 )") 00:16:58.611 13:00:03 -- nvmf/common.sh@545 -- # jq . 00:16:58.611 13:00:03 -- nvmf/common.sh@543 -- # cat 00:16:58.611 13:00:03 -- target/bdev_io_wait.sh@37 -- # wait 3951622 00:16:58.611 13:00:03 -- nvmf/common.sh@543 -- # cat 00:16:58.611 13:00:03 -- nvmf/common.sh@546 -- # IFS=, 00:16:58.611 13:00:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:58.611 "params": { 00:16:58.611 "name": "Nvme1", 00:16:58.611 "trtype": "tcp", 00:16:58.611 "traddr": "10.0.0.2", 00:16:58.611 "adrfam": "ipv4", 00:16:58.611 "trsvcid": "4420", 00:16:58.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.611 "hdgst": false, 00:16:58.611 "ddgst": false 00:16:58.611 }, 00:16:58.611 "method": "bdev_nvme_attach_controller" 00:16:58.611 }' 00:16:58.611 13:00:03 -- nvmf/common.sh@545 -- # jq . 00:16:58.611 13:00:03 -- nvmf/common.sh@545 -- # jq . 00:16:58.611 13:00:03 -- nvmf/common.sh@545 -- # jq . 
00:16:58.611 13:00:03 -- nvmf/common.sh@546 -- # IFS=, 00:16:58.611 13:00:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:58.611 "params": { 00:16:58.611 "name": "Nvme1", 00:16:58.611 "trtype": "tcp", 00:16:58.611 "traddr": "10.0.0.2", 00:16:58.611 "adrfam": "ipv4", 00:16:58.611 "trsvcid": "4420", 00:16:58.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.611 "hdgst": false, 00:16:58.611 "ddgst": false 00:16:58.611 }, 00:16:58.611 "method": "bdev_nvme_attach_controller" 00:16:58.611 }' 00:16:58.611 13:00:03 -- nvmf/common.sh@546 -- # IFS=, 00:16:58.611 13:00:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:58.611 "params": { 00:16:58.611 "name": "Nvme1", 00:16:58.611 "trtype": "tcp", 00:16:58.611 "traddr": "10.0.0.2", 00:16:58.611 "adrfam": "ipv4", 00:16:58.611 "trsvcid": "4420", 00:16:58.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.611 "hdgst": false, 00:16:58.611 "ddgst": false 00:16:58.611 }, 00:16:58.611 "method": "bdev_nvme_attach_controller" 00:16:58.611 }' 00:16:58.611 13:00:03 -- nvmf/common.sh@546 -- # IFS=, 00:16:58.611 13:00:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:58.611 "params": { 00:16:58.611 "name": "Nvme1", 00:16:58.611 "trtype": "tcp", 00:16:58.611 "traddr": "10.0.0.2", 00:16:58.611 "adrfam": "ipv4", 00:16:58.611 "trsvcid": "4420", 00:16:58.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.611 "hdgst": false, 00:16:58.611 "ddgst": false 00:16:58.611 }, 00:16:58.611 "method": "bdev_nvme_attach_controller" 00:16:58.611 }' 00:16:58.611 [2024-04-26 13:00:03.590217] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:58.611 [2024-04-26 13:00:03.590268] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:58.611 [2024-04-26 13:00:03.590356] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:58.611 [2024-04-26 13:00:03.590401] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:58.611 [2024-04-26 13:00:03.591215] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:58.611 [2024-04-26 13:00:03.591259] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:58.611 [2024-04-26 13:00:03.594405] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
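
With the target configured (a malloc bdev behind nqn.2016-06.io.spdk:cnode1 and a TCP listener on 10.0.0.2:4420), bdev_io_wait.sh launches four bdevperf instances in parallel, one per I/O type, each fed the generated JSON printed above on fd 63. A condensed sketch of both halves, taken from the rpc_cmd calls and the write-workload invocation in the log; the read/flush/unmap jobs differ only in -m, -i and -w, and process substitution stands in here for the /dev/fd/63 redirection:

    # Target side, as issued through rpc_cmd above:
    rpc.py bdev_set_options -p 5 -c 1
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side, the write job (WRITE_PID=3951622 in this run):
    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
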
00:16:58.611 [2024-04-26 13:00:03.594448] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:58.611 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.872 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.872 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.872 [2024-04-26 13:00:03.729569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.872 [2024-04-26 13:00:03.772444] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.872 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.872 [2024-04-26 13:00:03.778266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:58.872 [2024-04-26 13:00:03.819846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.872 [2024-04-26 13:00:03.820690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:58.872 [2024-04-26 13:00:03.867869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:58.872 [2024-04-26 13:00:03.879492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.872 [2024-04-26 13:00:03.930659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:59.133 Running I/O for 1 seconds... 00:16:59.133 Running I/O for 1 seconds... 00:16:59.133 Running I/O for 1 seconds... 00:16:59.394 Running I/O for 1 seconds... 00:16:59.965 00:16:59.965 Latency(us) 00:16:59.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.965 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:59.965 Nvme1n1 : 1.00 14252.07 55.67 0.00 0.00 8954.72 4833.28 19114.67 00:16:59.965 =================================================================================================================== 00:16:59.965 Total : 14252.07 55.67 0.00 0.00 8954.72 4833.28 19114.67 00:16:59.965 00:16:59.965 Latency(us) 00:16:59.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.965 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:59.965 Nvme1n1 : 1.00 190227.70 743.08 0.00 0.00 670.07 266.24 754.35 00:16:59.965 =================================================================================================================== 00:16:59.965 Total : 190227.70 743.08 0.00 0.00 670.07 266.24 754.35 00:17:00.225 00:17:00.225 Latency(us) 00:17:00.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.225 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:00.225 Nvme1n1 : 1.00 16594.64 64.82 0.00 0.00 7690.60 4751.36 17039.36 00:17:00.225 =================================================================================================================== 00:17:00.225 Total : 16594.64 64.82 0.00 0.00 7690.60 4751.36 17039.36 00:17:00.225 00:17:00.225 Latency(us) 00:17:00.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.225 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:00.225 Nvme1n1 : 1.01 11454.39 44.74 0.00 0.00 11140.15 5352.11 23265.28 00:17:00.225 =================================================================================================================== 00:17:00.225 Total : 11454.39 44.74 0.00 0.00 11140.15 5352.11 23265.28 00:17:00.485 13:00:05 -- target/bdev_io_wait.sh@38 -- # wait 3951624 00:17:00.485 
13:00:05 -- target/bdev_io_wait.sh@39 -- # wait 3951627 00:17:00.485 13:00:05 -- target/bdev_io_wait.sh@40 -- # wait 3951629 00:17:00.485 13:00:05 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.485 13:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.485 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:17:00.485 13:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.485 13:00:05 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:00.485 13:00:05 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:00.485 13:00:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:00.485 13:00:05 -- nvmf/common.sh@117 -- # sync 00:17:00.485 13:00:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.485 13:00:05 -- nvmf/common.sh@120 -- # set +e 00:17:00.485 13:00:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.485 13:00:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.485 rmmod nvme_tcp 00:17:00.485 rmmod nvme_fabrics 00:17:00.485 rmmod nvme_keyring 00:17:00.485 13:00:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.485 13:00:05 -- nvmf/common.sh@124 -- # set -e 00:17:00.485 13:00:05 -- nvmf/common.sh@125 -- # return 0 00:17:00.485 13:00:05 -- nvmf/common.sh@478 -- # '[' -n 3951294 ']' 00:17:00.485 13:00:05 -- nvmf/common.sh@479 -- # killprocess 3951294 00:17:00.485 13:00:05 -- common/autotest_common.sh@936 -- # '[' -z 3951294 ']' 00:17:00.485 13:00:05 -- common/autotest_common.sh@940 -- # kill -0 3951294 00:17:00.485 13:00:05 -- common/autotest_common.sh@941 -- # uname 00:17:00.485 13:00:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.485 13:00:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3951294 00:17:00.485 13:00:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:00.485 13:00:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:00.485 13:00:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3951294' 00:17:00.485 killing process with pid 3951294 00:17:00.485 13:00:05 -- common/autotest_common.sh@955 -- # kill 3951294 00:17:00.485 13:00:05 -- common/autotest_common.sh@960 -- # wait 3951294 00:17:00.746 13:00:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:00.746 13:00:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:00.746 13:00:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:00.746 13:00:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.746 13:00:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.746 13:00:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.746 13:00:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.746 13:00:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.658 13:00:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.658 00:17:02.658 real 0m12.328s 00:17:02.658 user 0m18.660s 00:17:02.658 sys 0m6.719s 00:17:02.658 13:00:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.658 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:17:02.658 ************************************ 00:17:02.658 END TEST nvmf_bdev_io_wait 00:17:02.658 ************************************ 00:17:02.918 13:00:07 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:02.918 13:00:07 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:17:02.918 13:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.918 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:17:02.918 ************************************ 00:17:02.918 START TEST nvmf_queue_depth 00:17:02.918 ************************************ 00:17:02.918 13:00:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:03.178 * Looking for test storage... 00:17:03.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.178 13:00:08 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.178 13:00:08 -- nvmf/common.sh@7 -- # uname -s 00:17:03.178 13:00:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.178 13:00:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.178 13:00:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.178 13:00:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.178 13:00:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.178 13:00:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.178 13:00:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.178 13:00:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.178 13:00:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.178 13:00:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.178 13:00:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.178 13:00:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.178 13:00:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.178 13:00:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.178 13:00:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.178 13:00:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.178 13:00:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.178 13:00:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.178 13:00:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.178 13:00:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.178 13:00:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.178 13:00:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.178 13:00:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.178 13:00:08 -- paths/export.sh@5 -- # export PATH 00:17:03.178 13:00:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.178 13:00:08 -- nvmf/common.sh@47 -- # : 0 00:17:03.178 13:00:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.178 13:00:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.178 13:00:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.178 13:00:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.178 13:00:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.178 13:00:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.178 13:00:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.178 13:00:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.178 13:00:08 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:03.178 13:00:08 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:03.178 13:00:08 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.178 13:00:08 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:03.178 13:00:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:03.178 13:00:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.178 13:00:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:03.179 13:00:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:03.179 13:00:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:03.179 13:00:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.179 13:00:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.179 13:00:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.179 13:00:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:03.179 13:00:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:03.179 13:00:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.179 13:00:08 -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.823 13:00:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:09.823 13:00:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.823 13:00:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.823 13:00:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.823 13:00:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.823 13:00:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.823 13:00:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.823 13:00:14 -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.823 13:00:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.823 13:00:14 -- nvmf/common.sh@296 -- # e810=() 00:17:09.823 13:00:14 -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.823 13:00:14 -- nvmf/common.sh@297 -- # x722=() 00:17:09.823 13:00:14 -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.823 13:00:14 -- nvmf/common.sh@298 -- # mlx=() 00:17:09.823 13:00:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.823 13:00:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.823 13:00:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.823 13:00:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.823 13:00:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.823 13:00:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.823 13:00:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.823 13:00:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.823 13:00:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.823 13:00:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:09.823 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:09.823 13:00:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.823 13:00:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.824 13:00:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:09.824 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:09.824 13:00:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:17:09.824 13:00:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.824 13:00:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.824 13:00:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.824 13:00:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:09.824 13:00:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.824 13:00:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:09.824 Found net devices under 0000:31:00.0: cvl_0_0 00:17:09.824 13:00:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.824 13:00:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.824 13:00:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.824 13:00:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:09.824 13:00:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.824 13:00:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:09.824 Found net devices under 0000:31:00.1: cvl_0_1 00:17:09.824 13:00:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.824 13:00:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:09.824 13:00:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:09.824 13:00:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:09.824 13:00:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:09.824 13:00:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.824 13:00:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.824 13:00:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.824 13:00:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.824 13:00:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.824 13:00:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.824 13:00:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.824 13:00:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.824 13:00:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.824 13:00:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.824 13:00:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.824 13:00:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.824 13:00:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.085 13:00:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.085 13:00:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.085 13:00:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.085 13:00:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.085 13:00:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.085 13:00:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.085 13:00:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:17:10.085 00:17:10.085 --- 10.0.0.2 ping statistics --- 00:17:10.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.085 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:17:10.085 13:00:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:17:10.346 00:17:10.346 --- 10.0.0.1 ping statistics --- 00:17:10.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.346 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:10.346 13:00:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.346 13:00:15 -- nvmf/common.sh@411 -- # return 0 00:17:10.347 13:00:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:10.347 13:00:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.347 13:00:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:10.347 13:00:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:10.347 13:00:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.347 13:00:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:10.347 13:00:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:10.347 13:00:15 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:10.347 13:00:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:10.347 13:00:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.347 13:00:15 -- common/autotest_common.sh@10 -- # set +x 00:17:10.347 13:00:15 -- nvmf/common.sh@470 -- # nvmfpid=3956710 00:17:10.347 13:00:15 -- nvmf/common.sh@471 -- # waitforlisten 3956710 00:17:10.347 13:00:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.347 13:00:15 -- common/autotest_common.sh@817 -- # '[' -z 3956710 ']' 00:17:10.347 13:00:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.347 13:00:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.347 13:00:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.347 13:00:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.347 13:00:15 -- common/autotest_common.sh@10 -- # set +x 00:17:10.347 [2024-04-26 13:00:15.262956] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:10.347 [2024-04-26 13:00:15.263017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.347 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.347 [2024-04-26 13:00:15.347220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.608 [2024-04-26 13:00:15.434023] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.608 [2024-04-26 13:00:15.434081] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
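For reference, the network and target bring-up traced above reduces to the following shell sketch. It is assembled only from commands visible in this run; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk and the workspace path are specific to this job, and backgrounding nvmf_tgt with & merely stands in for the harness's waitforlisten logic. Running the target in its own namespace is what lets a single host push NVMe/TCP traffic across the two physical e810 ports instead of loopback.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk
    # move one e810 port into a private netns so the target (10.0.0.2) and
    # the initiator (10.0.0.1) reach each other over real NICs on one host
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1    # target -> initiator
    modprobe nvme-tcp
    # target application pinned to core 1 (-m 0x2), run inside the namespace
    ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &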
00:17:10.608 [2024-04-26 13:00:15.434089] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.608 [2024-04-26 13:00:15.434096] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.608 [2024-04-26 13:00:15.434102] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.608 [2024-04-26 13:00:15.434125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.180 13:00:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.180 13:00:16 -- common/autotest_common.sh@850 -- # return 0 00:17:11.180 13:00:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:11.180 13:00:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:11.180 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.180 13:00:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.180 13:00:16 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.180 13:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.181 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.181 [2024-04-26 13:00:16.099077] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.181 13:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.181 13:00:16 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:11.181 13:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.181 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.181 Malloc0 00:17:11.181 13:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.181 13:00:16 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:11.181 13:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.181 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.181 13:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.181 13:00:16 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.181 13:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.181 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.181 13:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.181 13:00:16 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.181 13:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.181 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.181 [2024-04-26 13:00:16.170536] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.181 13:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.181 13:00:16 -- target/queue_depth.sh@30 -- # bdevperf_pid=3956865 00:17:11.181 13:00:16 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.181 13:00:16 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:11.181 13:00:16 -- target/queue_depth.sh@33 -- # waitforlisten 3956865 /var/tmp/bdevperf.sock 00:17:11.181 13:00:16 -- common/autotest_common.sh@817 -- # '[' -z 3956865 ']' 
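The rpc_cmd calls traced above are harness wrappers; expressed directly with scripts/rpc.py (the tool the later multipath section assigns to rpc_py), the queue-depth target and the initiator-side bdevperf process are stood up roughly as follows. This is a sketch, not the harness itself, and it assumes the target listens on the default /var/tmp/spdk.sock.
    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf started in wait-for-RPC mode (-z) on its own socket,
    # preconfigured for queue depth 1024, 4 KiB I/O, verify workload, 10 seconds
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
With -z, bdevperf stays idle until it is told to run over its RPC socket, so the workload only starts once the target side is fully configured.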
00:17:11.181 13:00:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.181 13:00:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:11.181 13:00:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.181 13:00:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:11.181 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.181 [2024-04-26 13:00:16.223699] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:11.181 [2024-04-26 13:00:16.223769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956865 ] 00:17:11.440 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.440 [2024-04-26 13:00:16.290542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.440 [2024-04-26 13:00:16.362447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.011 13:00:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:12.011 13:00:17 -- common/autotest_common.sh@850 -- # return 0 00:17:12.011 13:00:17 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:12.011 13:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.011 13:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:12.271 NVMe0n1 00:17:12.271 13:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.271 13:00:17 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.271 Running I/O for 10 seconds... 
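Once bdevperf is listening on /var/tmp/bdevperf.sock, the rest of the run above is driven entirely over that second RPC socket; it comes down to the two commands traced here, with rpc_cmd spelled out as scripts/rpc.py:
    # attach the exported namespace as local bdev NVMe0n1 over NVMe/TCP
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # start the preconfigured workload and wait for completion
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests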
00:17:22.271 00:17:22.271 Latency(us) 00:17:22.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.271 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:22.271 Verification LBA range: start 0x0 length 0x4000 00:17:22.271 NVMe0n1 : 10.05 11341.24 44.30 0.00 0.00 89946.35 12397.23 73837.23 00:17:22.271 =================================================================================================================== 00:17:22.271 Total : 11341.24 44.30 0.00 0.00 89946.35 12397.23 73837.23 00:17:22.271 0 00:17:22.271 13:00:27 -- target/queue_depth.sh@39 -- # killprocess 3956865 00:17:22.271 13:00:27 -- common/autotest_common.sh@936 -- # '[' -z 3956865 ']' 00:17:22.271 13:00:27 -- common/autotest_common.sh@940 -- # kill -0 3956865 00:17:22.271 13:00:27 -- common/autotest_common.sh@941 -- # uname 00:17:22.271 13:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.271 13:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3956865 00:17:22.532 13:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:22.532 13:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:22.532 13:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3956865' 00:17:22.532 killing process with pid 3956865 00:17:22.532 13:00:27 -- common/autotest_common.sh@955 -- # kill 3956865 00:17:22.532 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.532 00:17:22.532 Latency(us) 00:17:22.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.532 =================================================================================================================== 00:17:22.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.532 13:00:27 -- common/autotest_common.sh@960 -- # wait 3956865 00:17:22.532 13:00:27 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:22.532 13:00:27 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:22.532 13:00:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:22.532 13:00:27 -- nvmf/common.sh@117 -- # sync 00:17:22.532 13:00:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.532 13:00:27 -- nvmf/common.sh@120 -- # set +e 00:17:22.532 13:00:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.532 13:00:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.532 rmmod nvme_tcp 00:17:22.532 rmmod nvme_fabrics 00:17:22.532 rmmod nvme_keyring 00:17:22.532 13:00:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.532 13:00:27 -- nvmf/common.sh@124 -- # set -e 00:17:22.532 13:00:27 -- nvmf/common.sh@125 -- # return 0 00:17:22.532 13:00:27 -- nvmf/common.sh@478 -- # '[' -n 3956710 ']' 00:17:22.532 13:00:27 -- nvmf/common.sh@479 -- # killprocess 3956710 00:17:22.532 13:00:27 -- common/autotest_common.sh@936 -- # '[' -z 3956710 ']' 00:17:22.532 13:00:27 -- common/autotest_common.sh@940 -- # kill -0 3956710 00:17:22.532 13:00:27 -- common/autotest_common.sh@941 -- # uname 00:17:22.532 13:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.532 13:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3956710 00:17:22.811 13:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:22.812 13:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:22.812 13:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3956710' 00:17:22.812 killing process with pid 3956710 00:17:22.812 
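As a sanity check, the result line above is internally consistent:
    11341.24 IOPS * 4096 B  ≈ 46.45 MB/s ≈ 44.30 MiB/s   (matches the reported MiB/s)
    11341.24 IOPS * 89946.35 us average latency ≈ 1020 I/Os in flight, in line with -q 1024 (Little's law)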
13:00:27 -- common/autotest_common.sh@955 -- # kill 3956710 00:17:22.812 13:00:27 -- common/autotest_common.sh@960 -- # wait 3956710 00:17:22.812 13:00:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:22.812 13:00:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:22.812 13:00:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:22.812 13:00:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.812 13:00:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.812 13:00:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.812 13:00:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.812 13:00:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.359 13:00:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.359 00:17:25.359 real 0m21.928s 00:17:25.359 user 0m25.564s 00:17:25.359 sys 0m6.461s 00:17:25.359 13:00:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:25.359 13:00:29 -- common/autotest_common.sh@10 -- # set +x 00:17:25.359 ************************************ 00:17:25.359 END TEST nvmf_queue_depth 00:17:25.359 ************************************ 00:17:25.359 13:00:29 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.359 13:00:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:25.359 13:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.359 13:00:29 -- common/autotest_common.sh@10 -- # set +x 00:17:25.359 ************************************ 00:17:25.359 START TEST nvmf_multipath 00:17:25.359 ************************************ 00:17:25.359 13:00:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.359 * Looking for test storage... 
00:17:25.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.359 13:00:30 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.359 13:00:30 -- nvmf/common.sh@7 -- # uname -s 00:17:25.359 13:00:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.359 13:00:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.359 13:00:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.359 13:00:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.359 13:00:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.359 13:00:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.359 13:00:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.359 13:00:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.359 13:00:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.359 13:00:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.360 13:00:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.360 13:00:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.360 13:00:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.360 13:00:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.360 13:00:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.360 13:00:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.360 13:00:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.360 13:00:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.360 13:00:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.360 13:00:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.360 13:00:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.360 13:00:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.360 13:00:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.360 13:00:30 -- paths/export.sh@5 -- # export PATH 00:17:25.360 13:00:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.360 13:00:30 -- nvmf/common.sh@47 -- # : 0 00:17:25.360 13:00:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.360 13:00:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.360 13:00:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.360 13:00:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.360 13:00:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.360 13:00:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.360 13:00:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.360 13:00:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.360 13:00:30 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.360 13:00:30 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.360 13:00:30 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:25.360 13:00:30 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.360 13:00:30 -- target/multipath.sh@43 -- # nvmftestinit 00:17:25.360 13:00:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:25.360 13:00:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.360 13:00:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:25.360 13:00:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:25.360 13:00:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:25.360 13:00:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.360 13:00:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.360 13:00:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.360 13:00:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:25.360 13:00:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:25.360 13:00:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.360 13:00:30 -- common/autotest_common.sh@10 -- # set +x 00:17:31.942 13:00:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:31.942 13:00:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:31.942 13:00:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:31.942 13:00:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:31.942 13:00:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:31.942 13:00:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:31.942 13:00:36 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:17:31.942 13:00:36 -- nvmf/common.sh@295 -- # net_devs=() 00:17:31.942 13:00:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:31.942 13:00:36 -- nvmf/common.sh@296 -- # e810=() 00:17:31.942 13:00:36 -- nvmf/common.sh@296 -- # local -ga e810 00:17:31.942 13:00:36 -- nvmf/common.sh@297 -- # x722=() 00:17:31.942 13:00:36 -- nvmf/common.sh@297 -- # local -ga x722 00:17:31.942 13:00:36 -- nvmf/common.sh@298 -- # mlx=() 00:17:31.942 13:00:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:31.942 13:00:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.942 13:00:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:31.942 13:00:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:31.942 13:00:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:31.942 13:00:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.942 13:00:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:31.942 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:31.942 13:00:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.942 13:00:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:31.942 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:31.942 13:00:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:31.942 13:00:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.942 13:00:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.942 13:00:36 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:17:31.942 13:00:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.942 13:00:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:31.942 Found net devices under 0000:31:00.0: cvl_0_0 00:17:31.942 13:00:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.942 13:00:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.942 13:00:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.942 13:00:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:31.942 13:00:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.942 13:00:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:31.942 Found net devices under 0000:31:00.1: cvl_0_1 00:17:31.942 13:00:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.942 13:00:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:31.942 13:00:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:31.942 13:00:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:31.942 13:00:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:31.942 13:00:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.942 13:00:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.942 13:00:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.942 13:00:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:31.942 13:00:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.942 13:00:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.942 13:00:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:31.942 13:00:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.942 13:00:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.942 13:00:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:31.942 13:00:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:31.942 13:00:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.942 13:00:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.201 13:00:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.201 13:00:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.201 13:00:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:32.201 13:00:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.201 13:00:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.201 13:00:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.201 13:00:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:32.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:17:32.201 00:17:32.201 --- 10.0.0.2 ping statistics --- 00:17:32.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.201 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:17:32.201 13:00:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:17:32.201 00:17:32.201 --- 10.0.0.1 ping statistics --- 00:17:32.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.201 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:32.201 13:00:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.201 13:00:37 -- nvmf/common.sh@411 -- # return 0 00:17:32.201 13:00:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:32.201 13:00:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.201 13:00:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:32.201 13:00:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:32.201 13:00:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.201 13:00:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:32.201 13:00:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:32.201 13:00:37 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:32.201 13:00:37 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:32.201 only one NIC for nvmf test 00:17:32.201 13:00:37 -- target/multipath.sh@47 -- # nvmftestfini 00:17:32.201 13:00:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:32.201 13:00:37 -- nvmf/common.sh@117 -- # sync 00:17:32.201 13:00:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.201 13:00:37 -- nvmf/common.sh@120 -- # set +e 00:17:32.201 13:00:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.201 13:00:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.201 rmmod nvme_tcp 00:17:32.201 rmmod nvme_fabrics 00:17:32.201 rmmod nvme_keyring 00:17:32.201 13:00:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.462 13:00:37 -- nvmf/common.sh@124 -- # set -e 00:17:32.462 13:00:37 -- nvmf/common.sh@125 -- # return 0 00:17:32.462 13:00:37 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:32.462 13:00:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:32.462 13:00:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:32.462 13:00:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:32.462 13:00:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.462 13:00:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.462 13:00:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.462 13:00:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.462 13:00:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.378 13:00:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:34.378 13:00:39 -- target/multipath.sh@48 -- # exit 0 00:17:34.378 13:00:39 -- target/multipath.sh@1 -- # nvmftestfini 00:17:34.378 13:00:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:34.378 13:00:39 -- nvmf/common.sh@117 -- # sync 00:17:34.378 13:00:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.378 13:00:39 -- nvmf/common.sh@120 -- # set +e 00:17:34.378 13:00:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.378 13:00:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.378 13:00:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.378 13:00:39 -- nvmf/common.sh@124 -- # set -e 00:17:34.378 13:00:39 -- nvmf/common.sh@125 -- # return 0 00:17:34.378 13:00:39 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:34.378 13:00:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:34.378 13:00:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:34.378 13:00:39 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:17:34.378 13:00:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.378 13:00:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.378 13:00:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.378 13:00:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.378 13:00:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.378 13:00:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:34.378 00:17:34.378 real 0m9.347s 00:17:34.378 user 0m1.971s 00:17:34.378 sys 0m5.259s 00:17:34.378 13:00:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:34.378 13:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:34.378 ************************************ 00:17:34.378 END TEST nvmf_multipath 00:17:34.378 ************************************ 00:17:34.378 13:00:39 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:34.378 13:00:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:34.378 13:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.378 13:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:34.639 ************************************ 00:17:34.639 START TEST nvmf_zcopy 00:17:34.639 ************************************ 00:17:34.639 13:00:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:34.639 * Looking for test storage... 00:17:34.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.639 13:00:39 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.639 13:00:39 -- nvmf/common.sh@7 -- # uname -s 00:17:34.639 13:00:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.639 13:00:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.639 13:00:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.639 13:00:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.639 13:00:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.639 13:00:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.639 13:00:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.639 13:00:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.639 13:00:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.639 13:00:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.639 13:00:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.639 13:00:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.639 13:00:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.639 13:00:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.639 13:00:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.639 13:00:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.639 13:00:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.639 13:00:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.639 13:00:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.639 13:00:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.639 
13:00:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.639 13:00:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.639 13:00:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.639 13:00:39 -- paths/export.sh@5 -- # export PATH 00:17:34.639 13:00:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.639 13:00:39 -- nvmf/common.sh@47 -- # : 0 00:17:34.639 13:00:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.639 13:00:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.639 13:00:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.639 13:00:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.639 13:00:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.639 13:00:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.639 13:00:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.639 13:00:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.904 13:00:39 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:34.904 13:00:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:34.904 13:00:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.904 13:00:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:34.904 13:00:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:34.904 13:00:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:34.904 13:00:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.904 13:00:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:34.904 13:00:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.904 13:00:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:34.904 13:00:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:34.904 13:00:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.904 13:00:39 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 13:00:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:43.052 13:00:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.052 13:00:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.052 13:00:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:43.052 13:00:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.052 13:00:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.052 13:00:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.052 13:00:46 -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.052 13:00:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.052 13:00:46 -- nvmf/common.sh@296 -- # e810=() 00:17:43.052 13:00:46 -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.052 13:00:46 -- nvmf/common.sh@297 -- # x722=() 00:17:43.052 13:00:46 -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.052 13:00:46 -- nvmf/common.sh@298 -- # mlx=() 00:17:43.052 13:00:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.052 13:00:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.052 13:00:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.052 13:00:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:43.052 13:00:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.052 13:00:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.052 13:00:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:43.052 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:43.052 13:00:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.052 13:00:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:43.052 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:17:43.052 13:00:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.052 13:00:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.052 13:00:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.052 13:00:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:43.052 13:00:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.052 13:00:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:43.052 Found net devices under 0000:31:00.0: cvl_0_0 00:17:43.052 13:00:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.052 13:00:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.052 13:00:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.052 13:00:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:43.052 13:00:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.052 13:00:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:43.052 Found net devices under 0000:31:00.1: cvl_0_1 00:17:43.052 13:00:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.052 13:00:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:43.052 13:00:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:43.052 13:00:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:43.052 13:00:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.052 13:00:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.052 13:00:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.052 13:00:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:43.052 13:00:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.052 13:00:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.052 13:00:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:43.052 13:00:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.052 13:00:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.052 13:00:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:43.052 13:00:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:43.052 13:00:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.052 13:00:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.052 13:00:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.052 13:00:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.052 13:00:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.052 13:00:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.052 13:00:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.052 
13:00:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.052 13:00:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:17:43.052 00:17:43.052 --- 10.0.0.2 ping statistics --- 00:17:43.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.052 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:17:43.052 13:00:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:17:43.052 00:17:43.052 --- 10.0.0.1 ping statistics --- 00:17:43.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.052 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:17:43.052 13:00:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.052 13:00:46 -- nvmf/common.sh@411 -- # return 0 00:17:43.052 13:00:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:43.052 13:00:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.052 13:00:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:43.052 13:00:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.052 13:00:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:43.052 13:00:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:43.052 13:00:47 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:43.052 13:00:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:43.052 13:00:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 13:00:47 -- nvmf/common.sh@470 -- # nvmfpid=3967657 00:17:43.052 13:00:47 -- nvmf/common.sh@471 -- # waitforlisten 3967657 00:17:43.052 13:00:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:43.052 13:00:47 -- common/autotest_common.sh@817 -- # '[' -z 3967657 ']' 00:17:43.052 13:00:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.052 13:00:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:43.052 13:00:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.052 13:00:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 [2024-04-26 13:00:47.070016] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:43.052 [2024-04-26 13:00:47.070066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.052 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.052 [2024-04-26 13:00:47.152412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.052 [2024-04-26 13:00:47.224722] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
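The nvmf_tcp_init steps logged above boil down to the following sequence (a hand-condensed recap of the commands visible in this log, not a quote of nvmf/common.sh): the e810 port cvl_0_0 is moved into a private network namespace and serves as the target at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1; an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are verified with ping before the target application is started.

    # Condensed recap of the namespace/IP setup above (cvl_0_* names taken from this run)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP port 4420
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator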
00:17:43.052 [2024-04-26 13:00:47.224775] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.052 [2024-04-26 13:00:47.224783] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.052 [2024-04-26 13:00:47.224789] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.052 [2024-04-26 13:00:47.224795] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.052 [2024-04-26 13:00:47.224819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.052 13:00:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.052 13:00:47 -- common/autotest_common.sh@850 -- # return 0 00:17:43.052 13:00:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:43.052 13:00:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 13:00:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.052 13:00:47 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:43.052 13:00:47 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:43.052 13:00:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 [2024-04-26 13:00:47.885194] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.052 13:00:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.052 13:00:47 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:43.052 13:00:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 13:00:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.052 13:00:47 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.052 13:00:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 [2024-04-26 13:00:47.901482] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.052 13:00:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.052 13:00:47 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:43.052 13:00:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 13:00:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.052 13:00:47 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:43.052 13:00:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 malloc0 00:17:43.052 13:00:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.052 13:00:47 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:43.052 13:00:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.052 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.052 13:00:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.052 13:00:47 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:43.052 13:00:47 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:43.052 13:00:47 -- nvmf/common.sh@521 -- # config=() 00:17:43.052 13:00:47 -- nvmf/common.sh@521 -- # local subsystem config 00:17:43.052 13:00:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:43.052 13:00:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:43.052 { 00:17:43.052 "params": { 00:17:43.052 "name": "Nvme$subsystem", 00:17:43.052 "trtype": "$TEST_TRANSPORT", 00:17:43.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.052 "adrfam": "ipv4", 00:17:43.052 "trsvcid": "$NVMF_PORT", 00:17:43.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.053 "hdgst": ${hdgst:-false}, 00:17:43.053 "ddgst": ${ddgst:-false} 00:17:43.053 }, 00:17:43.053 "method": "bdev_nvme_attach_controller" 00:17:43.053 } 00:17:43.053 EOF 00:17:43.053 )") 00:17:43.053 13:00:47 -- nvmf/common.sh@543 -- # cat 00:17:43.053 13:00:47 -- nvmf/common.sh@545 -- # jq . 00:17:43.053 13:00:47 -- nvmf/common.sh@546 -- # IFS=, 00:17:43.053 13:00:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:43.053 "params": { 00:17:43.053 "name": "Nvme1", 00:17:43.053 "trtype": "tcp", 00:17:43.053 "traddr": "10.0.0.2", 00:17:43.053 "adrfam": "ipv4", 00:17:43.053 "trsvcid": "4420", 00:17:43.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.053 "hdgst": false, 00:17:43.053 "ddgst": false 00:17:43.053 }, 00:17:43.053 "method": "bdev_nvme_attach_controller" 00:17:43.053 }' 00:17:43.053 [2024-04-26 13:00:47.994264] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:43.053 [2024-04-26 13:00:47.994331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3967694 ] 00:17:43.053 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.053 [2024-04-26 13:00:48.061291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.313 [2024-04-26 13:00:48.133911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.313 Running I/O for 10 seconds... 
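For reference, the target configuration that zcopy.sh issued above through the rpc_cmd wrapper corresponds to the following direct scripts/rpc.py calls (a sketch against the default /var/tmp/spdk.sock RPC socket; the test itself uses rpc_cmd from autotest_common.sh while the target runs inside the cvl_0_0_ns_spdk namespace):

    # TCP transport with zero-copy enabled; -c 0 sets the in-capsule data size to 0,
    # which appears intended to keep write data out of the capsule so the zero-copy
    # path is actually exercised.
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem limited to 10 namespaces, with a data listener and a discovery
    # listener on the namespaced target address.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1.
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then connects from the initiator side using the JSON emitted by gen_nvmf_target_json above (a single bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420) and runs the 10-second verify workload at queue depth 128 with 8192-byte I/Os, whose results follow.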
00:17:55.554
00:17:55.554                                                                                                 Latency(us)
00:17:55.554 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:55.554 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:55.554      Verification LBA range: start 0x0 length 0x1000
00:17:55.554      Nvme1n1              :      10.05    8863.83      69.25       0.00       0.00   14336.90    3058.35   45001.39
00:17:55.554 ===================================================================================================================
00:17:55.554 Total                     :               8863.83      69.25       0.00       0.00   14336.90    3058.35   45001.39
00:17:55.554 13:00:58 -- target/zcopy.sh@39 -- # perfpid=3969728 00:17:55.554 13:00:58 -- target/zcopy.sh@41 -- # xtrace_disable 00:17:55.554 13:00:58 -- common/autotest_common.sh@10 -- # set +x 00:17:55.554 13:00:58 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:55.554 13:00:58 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:55.554 13:00:58 -- nvmf/common.sh@521 -- # config=() 00:17:55.554 13:00:58 -- nvmf/common.sh@521 -- # local subsystem config 00:17:55.554 [2024-04-26 13:00:58.536137] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.554 [2024-04-26 13:00:58.536169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.554 13:00:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:55.554 13:00:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:55.554 { 00:17:55.554 "params": { 00:17:55.554 "name": "Nvme$subsystem", 00:17:55.554 "trtype": "$TEST_TRANSPORT", 00:17:55.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.554 "adrfam": "ipv4", 00:17:55.554 "trsvcid": "$NVMF_PORT", 00:17:55.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.554 "hdgst": ${hdgst:-false}, 00:17:55.554 "ddgst": ${ddgst:-false} 00:17:55.554 }, 00:17:55.554 "method": "bdev_nvme_attach_controller" 00:17:55.554 } 00:17:55.554 EOF 00:17:55.554 )") 00:17:55.554 13:00:58 -- nvmf/common.sh@543 -- # cat 00:17:55.554 [2024-04-26 13:00:58.544124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.554 [2024-04-26 13:00:58.544137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.554 13:00:58 -- nvmf/common.sh@545 -- # jq .
00:17:55.554 13:00:58 -- nvmf/common.sh@546 -- # IFS=, 00:17:55.554 13:00:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:55.554 "params": { 00:17:55.554 "name": "Nvme1", 00:17:55.554 "trtype": "tcp", 00:17:55.554 "traddr": "10.0.0.2", 00:17:55.554 "adrfam": "ipv4", 00:17:55.554 "trsvcid": "4420", 00:17:55.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.554 "hdgst": false, 00:17:55.554 "ddgst": false 00:17:55.554 }, 00:17:55.554 "method": "bdev_nvme_attach_controller" 00:17:55.554 }' 00:17:55.555 [2024-04-26 13:00:58.552143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.552150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.560162] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.560169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.568183] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.568190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.576202] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.576209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.578663] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:55.555 [2024-04-26 13:00:58.578710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3969728 ] 00:17:55.555 [2024-04-26 13:00:58.584223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.584230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.592244] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.592250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.600265] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.600271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.555 [2024-04-26 13:00:58.608286] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.608294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.616306] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.616313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.624327] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.624334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.632347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:17:55.555 [2024-04-26 13:00:58.632353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.637423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.555 [2024-04-26 13:00:58.640367] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.640375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.648388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.648396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.656409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.656418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.664428] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.664436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.672449] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.672461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.680470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.680479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.688491] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.688499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.696512] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.696520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.699490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.555 [2024-04-26 13:00:58.704532] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.704540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.712555] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.712565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.720578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.720590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.728595] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.728604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.736615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.736623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.744635] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.744642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.752656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.752663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.760677] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.760683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.768697] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.768703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.776723] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.776734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.784743] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.784752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.792764] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.792773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.800785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.800793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.808806] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.808819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.816826] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.816835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.824853] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.824863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.832868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.832875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.840897] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.840911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 Running I/O for 5 seconds... 
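The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs interleaved above and filling the rest of this log are not, by themselves, a failure: while the 5-second randrw job runs, the test keeps calling nvmf_subsystem_add_ns for a namespace ID that already exists. Each attempt pauses and resumes the subsystem, which is the condition the zero-copy TCP path is being exercised against, and the RPC itself is rejected every time. A plausible sketch of that pattern follows; the loop structure and termination condition are assumptions for illustration, not the literal zcopy.sh source.

    # Hammer the subsystem with add-namespace requests while bdevperf (perfpid) is running.
    # Every call is expected to fail with "Requested NSID 1 already in use"; the point is
    # the subsystem pause/resume it triggers while zero-copy I/O is in flight.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done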
00:17:55.555 [2024-04-26 13:00:58.848911] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.848919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.858994] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.859009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.873018] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.873035] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.886054] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.886071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.899171] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.899188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.907626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.907641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.916691] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.916706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.925904] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.925919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.935081] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.935095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.943951] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.943966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.953373] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.953387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.962064] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.962079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.971058] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.971073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.979972] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.979987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.988569] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 
[2024-04-26 13:00:58.988584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:58.996705] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:58.996719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.006003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.006019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.014160] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.014175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.023314] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.023329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.032496] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.032511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.041589] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.041603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.050568] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.050583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.059209] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.059223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.067778] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.067792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.077022] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.077037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.085660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.085675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.094714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.094729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.103692] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.103707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.112912] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.112927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.122181] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.122197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.130388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.130403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.139577] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.555 [2024-04-26 13:00:59.139592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.555 [2024-04-26 13:00:59.148397] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.148412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.157308] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.157323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.165991] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.166005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.174684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.174699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.184226] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.184241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.192918] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.192932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.201931] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.201945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.210795] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.210809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.219945] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.219960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.228866] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.228881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.237413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.237428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.246465] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.246479] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.255644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.255659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.264885] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.264900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.273944] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.273959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.282802] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.282816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.291604] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.291619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.300899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.300913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.309895] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.309910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.318940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.318961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.327197] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.327211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.336000] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.336015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.344635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.344649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.353434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.353449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.362025] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.362039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.371231] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.371246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.379848] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.379863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.388626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.388641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.397297] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.397312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.406279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.406293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.415193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.415207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.424435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.424449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.432426] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.432439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.441537] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.441551] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.450887] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.450901] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.460264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.460278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.468947] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.468960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.477929] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.477943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.486685] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.486703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.495326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.495340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.504096] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.504110] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.513062] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.513077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.521766] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.521780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.530720] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.530734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.539508] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.539522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.548246] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.548260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.557348] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.557362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.566492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.566506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.575305] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.575319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.583998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.584012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.592466] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.592480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.601324] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.601338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.610212] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.610227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.618988] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.619002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.627527] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.627541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.636481] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.636495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.645831] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.645849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.654003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.654020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.662999] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.663013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.672125] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.672140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.680410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.680424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.689694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.689709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.698472] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.698486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.707400] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.707414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.716161] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.716175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.724320] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.724333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.733314] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.733328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.742090] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.742104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.750702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.750716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.759678] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.759691] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.767737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.767751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.775939] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.775953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.784292] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-04-26 13:00:59.784306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-04-26 13:00:59.792911] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.792926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.801759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.801774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.810525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.810539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.819551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.819569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.827717] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.827731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.836806] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.836820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.846203] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.846217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.854357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.854371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.863350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.863364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.872338] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.872352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.880432] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.880446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.889600] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.889615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.897928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.897942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.906759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.906774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.915706] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.915720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.924454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.924468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.933811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.933825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.942471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.942485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.950968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.950983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.959644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.959659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.969120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.969135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.977173] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.977186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.986266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.986284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:00:59.994943] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:00:59.994958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.003993] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.004009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.013470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.013485] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.022339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.022354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.030720] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.030735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.039320] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.039334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.048664] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.048679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.057501] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.057515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.066696] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.066711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.075424] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.075440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.084823] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.084842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.093518] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.093533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.102034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.102049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.110671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.110686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.119306] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.119320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.128560] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.128575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.137341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.137355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.146393] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.146408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.155517] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.155531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.164084] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.164099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.173595] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.173610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.182495] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.182509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.191597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.191611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.200057] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.200072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.209787] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.209803] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.218893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.218908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.227995] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.228010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.237215] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.237229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.245797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.245811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.254296] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.254310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.263524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.263537] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.272828] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.272846] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.282187] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.282202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.291347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.291362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.299789] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.299803] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.308811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.308826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.317069] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.317083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.325379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.325394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.334725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.334740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.343976] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.343991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.352707] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.352722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.361920] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.361935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.370690] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.370705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.379704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.379718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.388599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.388614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.397278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.397293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.405951] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.405965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.414586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.414600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.423422] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.423437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.431540] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.557 [2024-04-26 13:01:00.431554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.557 [2024-04-26 13:01:00.440685] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.440700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.449539] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.449554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.458386] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.458400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.467074] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.467088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.476048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.476063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.484336] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.484350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.493194] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.493209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.501742] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.501757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.510482] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.510497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.519239] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.519254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.528294] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.528309] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.537184] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.537198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.545345] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.545359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.553398] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.553412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.562542] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.562556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.571197] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.571211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.580132] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.580146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.589371] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.589385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.598149] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.598163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.558 [2024-04-26 13:01:00.606957] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.558 [2024-04-26 13:01:00.606971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.615549] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.615564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.624904] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.624918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.632904] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.632918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.642227] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.642241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.651320] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.651335] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.660830] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.660848] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.669367] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.669381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.677969] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.677983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.686690] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.686704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.695736] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.695750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.704130] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.704145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.712602] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.712615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.721563] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.721576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.730308] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.730323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.739702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.739716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.748445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.748460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.757267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.757281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.766086] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.766100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.774174] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.774189] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.783313] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.783326] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.792122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.792137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.801020] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.801034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.809743] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.809757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.818878] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.818896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.828063] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.828078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.837070] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.837084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.846110] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.846124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.855307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.855321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.863888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.863902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.818 [2024-04-26 13:01:00.872393] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.818 [2024-04-26 13:01:00.872407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.881535] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.881550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.890231] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.890245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.899083] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.899097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.907909] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.907923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.916670] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.916684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.924827] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.924846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.933848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.933862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.941983] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.941997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.951092] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.951106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.959934] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.959949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.968702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.968715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.977730] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.977744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.986299] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.986316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:00.995167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:00.995181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.004119] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.004133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.012825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.012843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.021971] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.021985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.030741] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.030755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.039793] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.039808] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.049186] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.049200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.058013] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.058027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.066704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.066718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.075365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.075379] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.084364] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.084378] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.092525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.092539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.101591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.101606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.110077] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.110091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.118607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.118621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.127156] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.127170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.079 [2024-04-26 13:01:01.136094] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.079 [2024-04-26 13:01:01.136108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.339 [2024-04-26 13:01:01.145099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.145113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.153550] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.153568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.162841] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.162856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.172415] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.172430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.181031] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.181046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.189984] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.189999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.198646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.198660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.207951] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.207965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.216667] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.216681] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.226074] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.226089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.234896] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.234910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.243561] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.243576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.252080] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.252094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.260798] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.260813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.269371] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.269385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.278308] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.278323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.287405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.287420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.296248] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.296262] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.304714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.304728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.314036] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.314051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.322894] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.322911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.331545] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.331559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.340681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.340695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.349305] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.349319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.357971] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.357985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.366868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.366882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.376156] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.376170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.385751] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.385765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.340 [2024-04-26 13:01:01.394575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.340 [2024-04-26 13:01:01.394589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.402923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.402938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.411920] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.411935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.419972] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.419986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.428933] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.428948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.438043] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.438057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.446128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.446142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.455347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.455361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.464435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.464450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.473222] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.473236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.482228] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.482242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.491433] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.491448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.500193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.500208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.508832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.508850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.517466] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.517481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.526590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.526604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.535582] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.535597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.545189] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.545204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.553922] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.553936] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.563037] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.563051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.572245] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.572259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.580879] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.580892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.589647] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.589661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.598393] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.598408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.607006] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.607020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.615740] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.615753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.624223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.624236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.633205] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.633219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.642377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.642391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.650584] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.650598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.601 [2024-04-26 13:01:01.659003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.601 [2024-04-26 13:01:01.659017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.668051] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.668066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.677142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.677157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.685768] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.685782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.695212] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.695226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.704082] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.704096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.712145] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.712159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.721187] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.721201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.730274] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.730288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.863 [2024-04-26 13:01:01.738889] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.863 [2024-04-26 13:01:01.738903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.747721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.747735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.757038] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.757052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.765237] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.765251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.774166] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.774179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.783129] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.783144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.791842] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.791856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.801065] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.801080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.809745] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.809759] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.818379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.818393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.827365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.827380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.835418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.835432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.844388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.844403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.853090] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.853104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.862416] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.862431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.871467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.871481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.880006] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.880020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.864 [2024-04-26 13:01:01.888741] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.864 [2024-04-26 13:01:01.888755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.865 [2024-04-26 13:01:01.897937] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.865 [2024-04-26 13:01:01.897951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.865 [2024-04-26 13:01:01.906253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.865 [2024-04-26 13:01:01.906267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.865 [2024-04-26 13:01:01.914867] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.865 [2024-04-26 13:01:01.914881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.923926] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.923940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.932943] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.932957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.941926] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.941941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.950722] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.950736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.959908] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.959923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.969135] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.969150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.978516] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.978531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.987150] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.987165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:01.995724] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:01.995738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.004525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.004540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.012862] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.012877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.021061] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.021075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.030167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.030181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.038994] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.039008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.047843] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.047857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.056880] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.056895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.127 [2024-04-26 13:01:02.065891] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.127 [2024-04-26 13:01:02.065906] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.127 [2024-04-26 13:01:02.083275] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.127 [2024-04-26 13:01:02.083289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair — spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 as already in use, followed by nvmf_rpc_ns_paused failing to add the namespace — repeats at roughly 9-13 ms intervals from 13:01:02.092 through 13:01:03.794 (elapsed 00:17:57.127 to 00:17:58.958) ...]
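The error storm above is produced by RPCs that try to attach a namespace while NSID 1 is still claimed on nqn.2016-06.io.spdk:cnode1; the rpc_cmd calls that follow in this log (nvmf_subsystem_remove_ns, bdev_delay_create, nvmf_subsystem_add_ns) name the calls involved. A minimal sketch that reproduces the same error pair against a running target, assuming SPDK's scripts/rpc.py is available and reusing the subsystem NQN, bdev names and delay parameters shown in this log:

# Sketch only: reproduce "Requested NSID 1 already in use" by double-adding a namespace.
# Assumes a malloc bdev named malloc0 already exists and scripts/rpc.py points at the running target.
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # first add succeeds
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # fails: NSID 1 already in use
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # frees the NSID again

Each failed add logs exactly the pair collapsed above: the subsystem layer rejects the NSID, then the RPC layer reports that it could not add the namespace.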
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.794813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair recurs at 13:01:03.802, .816, .830, .843, .856 and .866 before the I/O run prints its latency summary ...]
00:17:58.958 Latency(us)
00:17:58.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:58.958 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:58.958 Nvme1n1 : 5.01 18474.45 144.33 0.00 0.00 6920.80 2689.71 16493.23
00:17:58.958 ===================================================================================================================
00:17:58.958 Total : 18474.45 144.33 0.00 0.00 6920.80 2689.71 16493.23
00:17:58.958 [2024-04-26 13:01:03.874458] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.874473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair recurs at 13:01:03.882, .890, .898 and .906 ...] 00:17:58.958 [2024-04-26 13:01:03.914564] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.914574]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.922582] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.922591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.930600] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.930608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.938620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.938628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.946640] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.946647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.954662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.954670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.962683] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.962694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.970700] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.970708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.978722] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.978731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.986742] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.986751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 [2024-04-26 13:01:03.994762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.958 [2024-04-26 13:01:03.994769] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3969728) - No such process 00:17:58.958 13:01:04 -- target/zcopy.sh@49 -- # wait 3969728 00:17:58.958 13:01:04 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.958 13:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.958 13:01:04 -- common/autotest_common.sh@10 -- # set +x 00:17:58.958 13:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.958 13:01:04 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:58.958 13:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.958 13:01:04 -- common/autotest_common.sh@10 -- # set +x 00:17:59.220 delay0 00:17:59.220 13:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.220 13:01:04 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:59.220 13:01:04 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.220 13:01:04 -- common/autotest_common.sh@10 -- # set +x 00:17:59.220 13:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.220 13:01:04 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:59.220 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.220 [2024-04-26 13:01:04.124279] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:07.463 Initializing NVMe Controllers 00:18:07.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:07.463 Initialization complete. Launching workers. 00:18:07.463 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 22499 00:18:07.463 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22657, failed to submit 107 00:18:07.463 success 22552, unsuccess 105, failed 0 00:18:07.463 13:01:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:07.463 13:01:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:07.463 13:01:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:07.463 13:01:11 -- nvmf/common.sh@117 -- # sync 00:18:07.463 13:01:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:07.463 13:01:11 -- nvmf/common.sh@120 -- # set +e 00:18:07.463 13:01:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:07.463 13:01:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:07.463 rmmod nvme_tcp 00:18:07.463 rmmod nvme_fabrics 00:18:07.463 rmmod nvme_keyring 00:18:07.463 13:01:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:07.463 13:01:11 -- nvmf/common.sh@124 -- # set -e 00:18:07.463 13:01:11 -- nvmf/common.sh@125 -- # return 0 00:18:07.463 13:01:11 -- nvmf/common.sh@478 -- # '[' -n 3967657 ']' 00:18:07.463 13:01:11 -- nvmf/common.sh@479 -- # killprocess 3967657 00:18:07.463 13:01:11 -- common/autotest_common.sh@936 -- # '[' -z 3967657 ']' 00:18:07.463 13:01:11 -- common/autotest_common.sh@940 -- # kill -0 3967657 00:18:07.463 13:01:11 -- common/autotest_common.sh@941 -- # uname 00:18:07.463 13:01:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.463 13:01:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3967657 00:18:07.463 13:01:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:07.463 13:01:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:07.463 13:01:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3967657' 00:18:07.463 killing process with pid 3967657 00:18:07.463 13:01:11 -- common/autotest_common.sh@955 -- # kill 3967657 00:18:07.463 13:01:11 -- common/autotest_common.sh@960 -- # wait 3967657 00:18:07.463 13:01:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:07.463 13:01:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:07.463 13:01:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:07.463 13:01:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.463 13:01:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.463 13:01:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.463 13:01:11 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:07.463 13:01:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.843 13:01:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.843 00:18:08.843 real 0m33.954s 00:18:08.843 user 0m45.650s 00:18:08.843 sys 0m11.240s 00:18:08.843 13:01:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:08.843 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 ************************************ 00:18:08.843 END TEST nvmf_zcopy 00:18:08.843 ************************************ 00:18:08.843 13:01:13 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:08.843 13:01:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:08.843 13:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.843 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 ************************************ 00:18:08.843 START TEST nvmf_nmic 00:18:08.843 ************************************ 00:18:08.843 13:01:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:08.843 * Looking for test storage... 00:18:08.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.843 13:01:13 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.843 13:01:13 -- nvmf/common.sh@7 -- # uname -s 00:18:08.843 13:01:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.843 13:01:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.843 13:01:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.843 13:01:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.843 13:01:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.843 13:01:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.843 13:01:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.843 13:01:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.843 13:01:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.843 13:01:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.843 13:01:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.843 13:01:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.843 13:01:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.843 13:01:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.843 13:01:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.843 13:01:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.843 13:01:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.843 13:01:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.843 13:01:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.843 13:01:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.843 13:01:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.843 13:01:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.843 13:01:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.843 13:01:13 -- paths/export.sh@5 -- # export PATH 00:18:08.843 13:01:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.843 13:01:13 -- nvmf/common.sh@47 -- # : 0 00:18:08.843 13:01:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.843 13:01:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.843 13:01:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.843 13:01:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.844 13:01:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.844 13:01:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.844 13:01:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.844 13:01:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.844 13:01:13 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.844 13:01:13 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.844 13:01:13 -- target/nmic.sh@14 -- # nvmftestinit 00:18:08.844 13:01:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:08.844 13:01:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.844 13:01:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:08.844 13:01:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:08.844 13:01:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:08.844 13:01:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:08.844 13:01:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.844 13:01:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.844 13:01:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:08.844 13:01:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:08.844 13:01:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.844 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:18:16.978 13:01:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:16.978 13:01:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.978 13:01:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.978 13:01:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.978 13:01:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.978 13:01:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.978 13:01:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.978 13:01:20 -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.978 13:01:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.978 13:01:20 -- nvmf/common.sh@296 -- # e810=() 00:18:16.978 13:01:20 -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.978 13:01:20 -- nvmf/common.sh@297 -- # x722=() 00:18:16.978 13:01:20 -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.978 13:01:20 -- nvmf/common.sh@298 -- # mlx=() 00:18:16.978 13:01:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.978 13:01:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.978 13:01:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.978 13:01:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.978 13:01:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.978 13:01:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.978 13:01:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:16.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:16.978 13:01:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.978 13:01:20 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:16.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:16.978 13:01:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.978 13:01:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.978 13:01:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.978 13:01:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:16.978 13:01:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.978 13:01:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:16.978 Found net devices under 0000:31:00.0: cvl_0_0 00:18:16.978 13:01:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.978 13:01:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.978 13:01:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.978 13:01:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:16.978 13:01:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.978 13:01:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:16.978 Found net devices under 0000:31:00.1: cvl_0_1 00:18:16.978 13:01:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.978 13:01:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:16.978 13:01:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:16.978 13:01:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:16.978 13:01:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.978 13:01:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.978 13:01:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.978 13:01:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.978 13:01:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.978 13:01:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.978 13:01:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.978 13:01:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.978 13:01:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.978 13:01:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.978 13:01:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.978 13:01:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.978 13:01:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.978 13:01:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.978 13:01:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.978 13:01:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.978 13:01:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
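For context, the nvmf_tcp_init sequence traced above splits the two detected E810 ports into a target side and an initiator side: cvl_0_0 is moved into a dedicated network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1. A minimal standalone sketch of that plumbing, assuming the same interface and namespace names found in this run, would be:

    # Split the two E810 ports into target (netns) and initiator (root ns) sides.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

The 10.0.0.1 <-> 10.0.0.2 pings that follow then confirm connectivity between the two sides over the physical ports.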
00:18:16.978 13:01:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.978 13:01:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.978 13:01:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:18:16.978 00:18:16.978 --- 10.0.0.2 ping statistics --- 00:18:16.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.978 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:18:16.978 13:01:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:18:16.978 00:18:16.978 --- 10.0.0.1 ping statistics --- 00:18:16.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.978 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:18:16.978 13:01:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.978 13:01:20 -- nvmf/common.sh@411 -- # return 0 00:18:16.978 13:01:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:16.978 13:01:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.978 13:01:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:16.978 13:01:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.978 13:01:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:16.978 13:01:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:16.978 13:01:21 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:16.978 13:01:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:16.978 13:01:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:16.978 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.978 13:01:21 -- nvmf/common.sh@470 -- # nvmfpid=3976446 00:18:16.978 13:01:21 -- nvmf/common.sh@471 -- # waitforlisten 3976446 00:18:16.978 13:01:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.978 13:01:21 -- common/autotest_common.sh@817 -- # '[' -z 3976446 ']' 00:18:16.979 13:01:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.979 13:01:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:16.979 13:01:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.979 13:01:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 [2024-04-26 13:01:21.065924] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
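The target application is then launched inside that namespace, and the helper waits until it is listening on the default RPC socket (/var/tmp/spdk.sock). A rough equivalent of what nvmfappstart does here, with the wait reduced to polling rpc.py (the real waitforlisten helper handles retries and failures more carefully), could look like:

    # Paths and names from this run; SPDK points at the checked-out tree.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Approximation of waitforlisten: poll until the RPC socket answers.
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

-m 0xF is the core mask behind the four reactors reported on cores 0..3, and -e 0xFFFF is the tracepoint group mask echoed by app_setup_trace in the notices that follow.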
00:18:16.979 [2024-04-26 13:01:21.065978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.979 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.979 [2024-04-26 13:01:21.134379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.979 [2024-04-26 13:01:21.203870] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.979 [2024-04-26 13:01:21.203907] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.979 [2024-04-26 13:01:21.203916] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.979 [2024-04-26 13:01:21.203924] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.979 [2024-04-26 13:01:21.203931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.979 [2024-04-26 13:01:21.204086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.979 [2024-04-26 13:01:21.204207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.979 [2024-04-26 13:01:21.204364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.979 [2024-04-26 13:01:21.204365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.979 13:01:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:16.979 13:01:21 -- common/autotest_common.sh@850 -- # return 0 00:18:16.979 13:01:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:16.979 13:01:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 13:01:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.979 13:01:21 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 [2024-04-26 13:01:21.881413] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 Malloc0 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4420 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 [2024-04-26 13:01:21.941007] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:16.979 test case1: single bdev can't be used in multiple subsystems 00:18:16.979 13:01:21 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@28 -- # nmic_status=0 00:18:16.979 13:01:21 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 [2024-04-26 13:01:21.976948] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:16.979 [2024-04-26 13:01:21.976967] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:16.979 [2024-04-26 13:01:21.976975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.979 request: 00:18:16.979 { 00:18:16.979 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:16.979 "namespace": { 00:18:16.979 "bdev_name": "Malloc0", 00:18:16.979 "no_auto_visible": false 00:18:16.979 }, 00:18:16.979 "method": "nvmf_subsystem_add_ns", 00:18:16.979 "req_id": 1 00:18:16.979 } 00:18:16.979 Got JSON-RPC error response 00:18:16.979 response: 00:18:16.979 { 00:18:16.979 "code": -32602, 00:18:16.979 "message": "Invalid parameters" 00:18:16.979 } 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@29 -- # nmic_status=1 00:18:16.979 13:01:21 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:16.979 13:01:21 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:16.979 Adding namespace failed - expected result. 
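Test case1 above reduces to a handful of JSON-RPC calls: a malloc bdev is claimed as a namespace by subsystem cnode1, and a second subsystem is then expected to fail when it tries to add the same bdev. A hedged reconstruction using rpc.py directly (the test itself goes through the rpc_cmd wrapper and tracks the failure in nmic_status) would be:

    RPC="$SPDK/scripts/rpc.py"    # $SPDK as in the earlier sketch

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # Expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1.
    if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "ERROR: a claimed bdev should not be usable in a second subsystem"
    fi

The bdev.c/subsystem.c ERROR lines and the -32602 JSON-RPC response in the trace are exactly that failure path.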
00:18:16.979 13:01:21 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:16.979 test case2: host connect to nvmf target in multiple paths 00:18:16.979 13:01:21 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:16.979 13:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.979 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.979 [2024-04-26 13:01:21.989095] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:16.979 13:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.979 13:01:21 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.891 13:01:23 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:20.278 13:01:25 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:20.278 13:01:25 -- common/autotest_common.sh@1184 -- # local i=0 00:18:20.278 13:01:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.278 13:01:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:20.278 13:01:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:22.193 13:01:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:22.193 13:01:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:22.193 13:01:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.193 13:01:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:22.193 13:01:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.193 13:01:27 -- common/autotest_common.sh@1194 -- # return 0 00:18:22.193 13:01:27 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:22.193 [global] 00:18:22.193 thread=1 00:18:22.193 invalidate=1 00:18:22.193 rw=write 00:18:22.193 time_based=1 00:18:22.193 runtime=1 00:18:22.193 ioengine=libaio 00:18:22.193 direct=1 00:18:22.193 bs=4096 00:18:22.193 iodepth=1 00:18:22.193 norandommap=0 00:18:22.193 numjobs=1 00:18:22.193 00:18:22.193 verify_dump=1 00:18:22.193 verify_backlog=512 00:18:22.193 verify_state_save=0 00:18:22.193 do_verify=1 00:18:22.193 verify=crc32c-intel 00:18:22.193 [job0] 00:18:22.193 filename=/dev/nvme0n1 00:18:22.193 Could not set queue depth (nvme0n1) 00:18:22.453 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:22.453 fio-3.35 00:18:22.453 Starting 1 thread 00:18:23.838 00:18:23.838 job0: (groupid=0, jobs=1): err= 0: pid=3977985: Fri Apr 26 13:01:28 2024 00:18:23.838 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:23.838 slat (nsec): min=6634, max=59985, avg=21629.65, stdev=8560.74 00:18:23.838 clat (usec): min=299, max=873, avg=572.61, stdev=73.67 00:18:23.838 lat (usec): min=309, max=899, avg=594.24, stdev=76.48 00:18:23.838 clat percentiles (usec): 00:18:23.838 | 1.00th=[ 359], 5.00th=[ 449], 10.00th=[ 478], 20.00th=[ 502], 00:18:23.838 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 603], 00:18:23.838 | 70.00th=[ 619], 80.00th=[ 
627], 90.00th=[ 652], 95.00th=[ 668], 00:18:23.838 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 734], 99.95th=[ 873], 00:18:23.838 | 99.99th=[ 873] 00:18:23.838 write: IOPS=1043, BW=4176KiB/s (4276kB/s)(4180KiB/1001msec); 0 zone resets 00:18:23.838 slat (usec): min=9, max=32862, avg=57.39, stdev=1015.83 00:18:23.838 clat (usec): min=108, max=491, avg=303.56, stdev=51.07 00:18:23.838 lat (usec): min=118, max=33105, avg=360.95, stdev=1015.30 00:18:23.838 clat percentiles (usec): 00:18:23.838 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 253], 00:18:23.838 | 30.00th=[ 273], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:18:23.838 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 379], 00:18:23.838 | 99.00th=[ 433], 99.50th=[ 445], 99.90th=[ 469], 99.95th=[ 490], 00:18:23.838 | 99.99th=[ 490] 00:18:23.838 bw ( KiB/s): min= 4096, max= 4096, per=98.09%, avg=4096.00, stdev= 0.00, samples=1 00:18:23.838 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:23.838 lat (usec) : 250=8.80%, 500=51.23%, 750=39.92%, 1000=0.05% 00:18:23.838 cpu : usr=3.20%, sys=4.70%, ctx=2072, majf=0, minf=1 00:18:23.838 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:23.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.838 issued rwts: total=1024,1045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.838 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:23.838 00:18:23.838 Run status group 0 (all jobs): 00:18:23.838 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:18:23.838 WRITE: bw=4176KiB/s (4276kB/s), 4176KiB/s-4176KiB/s (4276kB/s-4276kB/s), io=4180KiB (4280kB), run=1001-1001msec 00:18:23.838 00:18:23.838 Disk stats (read/write): 00:18:23.838 nvme0n1: ios=892/1024, merge=0/0, ticks=1463/289, in_queue=1752, util=99.20% 00:18:23.838 13:01:28 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:24.099 13:01:28 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.099 13:01:28 -- common/autotest_common.sh@1205 -- # local i=0 00:18:24.099 13:01:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:24.099 13:01:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.099 13:01:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:24.099 13:01:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.099 13:01:28 -- common/autotest_common.sh@1217 -- # return 0 00:18:24.099 13:01:28 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:24.099 13:01:28 -- target/nmic.sh@53 -- # nvmftestfini 00:18:24.099 13:01:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:24.099 13:01:28 -- nvmf/common.sh@117 -- # sync 00:18:24.099 13:01:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.099 13:01:28 -- nvmf/common.sh@120 -- # set +e 00:18:24.099 13:01:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.099 13:01:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.099 rmmod nvme_tcp 00:18:24.099 rmmod nvme_fabrics 00:18:24.099 rmmod nvme_keyring 00:18:24.099 13:01:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.099 13:01:29 -- nvmf/common.sh@124 -- # set -e 00:18:24.099 13:01:29 -- nvmf/common.sh@125 -- # return 0 00:18:24.099 
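On the initiator side, test case2 is the mirror image: the kernel host connects to the same subsystem through both listeners (ports 4420 and 4421), waits for the namespace to appear under the subsystem's serial, runs a short 4 KiB queue-depth-1 libaio write job through the fio wrapper, and disconnects again. Roughly, with the host NQN/ID used in this run:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396

    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
         -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
         -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

    # waitforserial: the namespace shows up once with the subsystem serial.
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

    # Single-job 4 KiB write pass with crc32c verification (fio-wrapper -t write -v).
    "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t write -r 1 -v

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # drops both controllers
    modprobe -v -r nvme-tcp                           # nvmfcleanup, as traced above
    modprobe -v -r nvme-fabrics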
13:01:29 -- nvmf/common.sh@478 -- # '[' -n 3976446 ']' 00:18:24.099 13:01:29 -- nvmf/common.sh@479 -- # killprocess 3976446 00:18:24.099 13:01:29 -- common/autotest_common.sh@936 -- # '[' -z 3976446 ']' 00:18:24.099 13:01:29 -- common/autotest_common.sh@940 -- # kill -0 3976446 00:18:24.099 13:01:29 -- common/autotest_common.sh@941 -- # uname 00:18:24.099 13:01:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.099 13:01:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3976446 00:18:24.100 13:01:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:24.100 13:01:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:24.100 13:01:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3976446' 00:18:24.100 killing process with pid 3976446 00:18:24.100 13:01:29 -- common/autotest_common.sh@955 -- # kill 3976446 00:18:24.100 13:01:29 -- common/autotest_common.sh@960 -- # wait 3976446 00:18:24.361 13:01:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:24.361 13:01:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:24.361 13:01:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:24.361 13:01:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.361 13:01:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.361 13:01:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.361 13:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.361 13:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.277 13:01:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.277 00:18:26.277 real 0m17.585s 00:18:26.277 user 0m45.916s 00:18:26.277 sys 0m6.175s 00:18:26.277 13:01:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:26.277 13:01:31 -- common/autotest_common.sh@10 -- # set +x 00:18:26.277 ************************************ 00:18:26.277 END TEST nvmf_nmic 00:18:26.277 ************************************ 00:18:26.538 13:01:31 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:26.538 13:01:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:26.538 13:01:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:26.538 13:01:31 -- common/autotest_common.sh@10 -- # set +x 00:18:26.538 ************************************ 00:18:26.538 START TEST nvmf_fio_target 00:18:26.538 ************************************ 00:18:26.538 13:01:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:26.538 * Looking for test storage... 
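Before the next test script starts, the tail of nvmftestfini is also visible above: the target process is stopped by pid and the SPDK network namespace is removed. In outline, using the $nvmfpid captured when the target was started (the ip netns delete line is an approximation of what _remove_spdk_ns does for every *_ns_spdk namespace):

    kill "$nvmfpid" && wait "$nvmfpid"     # killprocess 3976446 in the trace
    ip netns delete cvl_0_0_ns_spdk        # _remove_spdk_ns (approximation)
    ip -4 addr flush cvl_0_1               # nvmf_tcp_fini leftover cleanup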
00:18:26.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.538 13:01:31 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.538 13:01:31 -- nvmf/common.sh@7 -- # uname -s 00:18:26.538 13:01:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.538 13:01:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.538 13:01:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.538 13:01:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.538 13:01:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.538 13:01:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.538 13:01:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.538 13:01:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.538 13:01:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.801 13:01:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.801 13:01:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.801 13:01:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.801 13:01:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.801 13:01:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.801 13:01:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.801 13:01:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.801 13:01:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.801 13:01:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.801 13:01:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.801 13:01:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.801 13:01:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.801 13:01:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.801 13:01:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.801 13:01:31 -- paths/export.sh@5 -- # export PATH 00:18:26.801 13:01:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.801 13:01:31 -- nvmf/common.sh@47 -- # : 0 00:18:26.801 13:01:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.801 13:01:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.801 13:01:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.801 13:01:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.801 13:01:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.801 13:01:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.801 13:01:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.801 13:01:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.801 13:01:31 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.801 13:01:31 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.801 13:01:31 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:26.801 13:01:31 -- target/fio.sh@16 -- # nvmftestinit 00:18:26.801 13:01:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:26.801 13:01:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.801 13:01:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:26.801 13:01:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:26.801 13:01:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:26.801 13:01:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.801 13:01:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.801 13:01:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.801 13:01:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:26.801 13:01:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:26.801 13:01:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.801 13:01:31 -- common/autotest_common.sh@10 -- # set +x 00:18:34.941 13:01:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:34.941 13:01:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.941 13:01:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.941 13:01:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.941 13:01:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.941 13:01:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.941 13:01:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.941 13:01:38 -- nvmf/common.sh@295 -- # net_devs=() 
00:18:34.941 13:01:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:34.941 13:01:38 -- nvmf/common.sh@296 -- # e810=() 00:18:34.941 13:01:38 -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.941 13:01:38 -- nvmf/common.sh@297 -- # x722=() 00:18:34.941 13:01:38 -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.941 13:01:38 -- nvmf/common.sh@298 -- # mlx=() 00:18:34.941 13:01:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.941 13:01:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.941 13:01:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.941 13:01:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:34.941 13:01:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.941 13:01:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.941 13:01:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:34.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:34.941 13:01:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.941 13:01:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:34.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:34.941 13:01:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.941 13:01:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.941 13:01:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.941 13:01:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.941 13:01:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:34.941 13:01:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:34.941 Found net devices under 0000:31:00.0: cvl_0_0 00:18:34.941 13:01:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.941 13:01:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.941 13:01:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.941 13:01:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.941 13:01:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.941 13:01:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:34.941 Found net devices under 0000:31:00.1: cvl_0_1 00:18:34.941 13:01:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.941 13:01:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:34.941 13:01:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:34.941 13:01:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:34.941 13:01:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:34.942 13:01:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:34.942 13:01:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.942 13:01:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.942 13:01:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:34.942 13:01:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:34.942 13:01:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:34.942 13:01:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:34.942 13:01:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:34.942 13:01:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:34.942 13:01:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.942 13:01:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:34.942 13:01:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:34.942 13:01:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:34.942 13:01:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:34.942 13:01:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:34.942 13:01:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.942 13:01:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:34.942 13:01:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.942 13:01:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:34.942 13:01:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:34.942 13:01:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:34.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:18:34.942 00:18:34.942 --- 10.0.0.2 ping statistics --- 00:18:34.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.942 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:18:34.942 13:01:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:34.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:18:34.942 00:18:34.942 --- 10.0.0.1 ping statistics --- 00:18:34.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.942 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:34.942 13:01:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.942 13:01:38 -- nvmf/common.sh@411 -- # return 0 00:18:34.942 13:01:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:34.942 13:01:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.942 13:01:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:34.942 13:01:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:34.942 13:01:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.942 13:01:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:34.942 13:01:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:34.942 13:01:38 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:34.942 13:01:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:34.942 13:01:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:34.942 13:01:38 -- common/autotest_common.sh@10 -- # set +x 00:18:34.942 13:01:38 -- nvmf/common.sh@470 -- # nvmfpid=3982416 00:18:34.942 13:01:38 -- nvmf/common.sh@471 -- # waitforlisten 3982416 00:18:34.942 13:01:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:34.942 13:01:38 -- common/autotest_common.sh@817 -- # '[' -z 3982416 ']' 00:18:34.942 13:01:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.942 13:01:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:34.942 13:01:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.942 13:01:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:34.942 13:01:38 -- common/autotest_common.sh@10 -- # set +x 00:18:34.942 [2024-04-26 13:01:39.039038] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:34.942 [2024-04-26 13:01:39.039111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.942 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.942 [2024-04-26 13:01:39.112014] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.942 [2024-04-26 13:01:39.186860] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.942 [2024-04-26 13:01:39.186906] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.942 [2024-04-26 13:01:39.186915] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.942 [2024-04-26 13:01:39.186923] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.942 [2024-04-26 13:01:39.186930] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
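The gather_supported_nvmf_pci_devs trace (repeated here at the start of fio.sh) buckets PCI functions by vendor/device ID into E810, X722 and Mellanox families, keeps the e810 family selected for this run, and resolves each PCI address to its kernel netdev through sysfs. A condensed sketch of the same lookup, using lspci in place of the script's internal pci_bus_cache, for the 8086:159b devices found in this log:

    declare -a e810 net_devs

    # E810 device IDs probed in the trace: 0x1592 and 0x159b (vendor 0x8086).
    for id in 1592 159b; do
        while read -r addr _; do
            e810+=("$addr")
        done < <(lspci -Dmm -d "8086:$id" 2>/dev/null)
    done

    # Map each PCI function to its netdev name (cvl_0_0 / cvl_0_1 here).
    for pci in "${e810[@]}"; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] && net_devs+=("${netdir##*/}")
        done
    done
    echo "Found net devices: ${net_devs[*]}"

The test then takes the first device (cvl_0_0) as the target interface and the second (cvl_0_1) as the initiator interface, which is why the namespace layout here matches the earlier nmic run.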
00:18:34.942 [2024-04-26 13:01:39.187163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.942 [2024-04-26 13:01:39.187344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.942 [2024-04-26 13:01:39.187387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.942 [2024-04-26 13:01:39.187388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.942 13:01:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.942 13:01:39 -- common/autotest_common.sh@850 -- # return 0 00:18:34.942 13:01:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:34.942 13:01:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:34.942 13:01:39 -- common/autotest_common.sh@10 -- # set +x 00:18:34.942 13:01:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.942 13:01:39 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:35.203 [2024-04-26 13:01:40.005844] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.203 13:01:40 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:35.203 13:01:40 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:35.203 13:01:40 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:35.463 13:01:40 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:35.463 13:01:40 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:35.725 13:01:40 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:35.725 13:01:40 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:35.725 13:01:40 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:35.725 13:01:40 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:35.986 13:01:40 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:36.246 13:01:41 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:36.246 13:01:41 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:36.246 13:01:41 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:36.246 13:01:41 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:36.508 13:01:41 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:36.508 13:01:41 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:36.769 13:01:41 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:36.769 13:01:41 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:36.769 13:01:41 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.029 13:01:41 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:37.029 13:01:41 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.291 13:01:42 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.291 [2024-04-26 13:01:42.271075] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.291 13:01:42 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:37.552 13:01:42 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:37.813 13:01:42 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:39.199 13:01:44 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:39.199 13:01:44 -- common/autotest_common.sh@1184 -- # local i=0 00:18:39.199 13:01:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.199 13:01:44 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:18:39.199 13:01:44 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:18:39.199 13:01:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:41.745 13:01:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:41.745 13:01:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:41.745 13:01:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:41.745 13:01:46 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:18:41.745 13:01:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.745 13:01:46 -- common/autotest_common.sh@1194 -- # return 0 00:18:41.745 13:01:46 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:41.745 [global] 00:18:41.745 thread=1 00:18:41.745 invalidate=1 00:18:41.745 rw=write 00:18:41.745 time_based=1 00:18:41.745 runtime=1 00:18:41.745 ioengine=libaio 00:18:41.745 direct=1 00:18:41.745 bs=4096 00:18:41.745 iodepth=1 00:18:41.745 norandommap=0 00:18:41.745 numjobs=1 00:18:41.745 00:18:41.745 verify_dump=1 00:18:41.745 verify_backlog=512 00:18:41.745 verify_state_save=0 00:18:41.745 do_verify=1 00:18:41.745 verify=crc32c-intel 00:18:41.745 [job0] 00:18:41.745 filename=/dev/nvme0n1 00:18:41.745 [job1] 00:18:41.745 filename=/dev/nvme0n2 00:18:41.745 [job2] 00:18:41.745 filename=/dev/nvme0n3 00:18:41.745 [job3] 00:18:41.745 filename=/dev/nvme0n4 00:18:41.745 Could not set queue depth (nvme0n1) 00:18:41.745 Could not set queue depth (nvme0n2) 00:18:41.745 Could not set queue depth (nvme0n3) 00:18:41.745 Could not set queue depth (nvme0n4) 00:18:41.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.745 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.745 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.745 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.745 fio-3.35 
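For the fio target test, the trace above builds a richer namespace layout before I/O starts: two plain malloc bdevs, a raid0 bdev striped over two more, and a concat0 bdev spanning three, all attached to the same subsystem so the initiator sees four namespaces. In rpc.py terms this is roughly (variable names as in the earlier sketches):

    $RPC bdev_malloc_create 64 512           # auto-named Malloc0; repeated for Malloc1..Malloc6
    $RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

waitforserial then expects four block devices (nvme0n1..nvme0n4) before the four-job write pass below is started against them.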
00:18:41.745 Starting 4 threads 00:18:43.157 00:18:43.157 job0: (groupid=0, jobs=1): err= 0: pid=3984301: Fri Apr 26 13:01:47 2024 00:18:43.157 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:43.157 slat (nsec): min=7167, max=60106, avg=24739.83, stdev=3136.08 00:18:43.157 clat (usec): min=689, max=1209, avg=1006.80, stdev=76.05 00:18:43.157 lat (usec): min=713, max=1233, avg=1031.54, stdev=76.06 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[ 791], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 963], 00:18:43.157 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:18:43.157 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:18:43.157 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:18:43.157 | 99.99th=[ 1205] 00:18:43.157 write: IOPS=756, BW=3025KiB/s (3098kB/s)(3028KiB/1001msec); 0 zone resets 00:18:43.157 slat (nsec): min=9163, max=72700, avg=27077.04, stdev=9593.46 00:18:43.157 clat (usec): min=125, max=899, avg=584.05, stdev=148.72 00:18:43.157 lat (usec): min=136, max=930, avg=611.12, stdev=153.31 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[ 233], 5.00th=[ 285], 10.00th=[ 351], 20.00th=[ 465], 00:18:43.157 | 30.00th=[ 523], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:18:43.157 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 783], 00:18:43.157 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 898], 99.95th=[ 898], 00:18:43.157 | 99.99th=[ 898] 00:18:43.157 bw ( KiB/s): min= 4096, max= 4096, per=37.67%, avg=4096.00, stdev= 0.00, samples=1 00:18:43.157 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:43.157 lat (usec) : 250=1.10%, 500=14.03%, 750=39.09%, 1000=21.51% 00:18:43.157 lat (msec) : 2=24.27% 00:18:43.157 cpu : usr=2.40%, sys=2.80%, ctx=1271, majf=0, minf=1 00:18:43.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 issued rwts: total=512,757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:43.157 job1: (groupid=0, jobs=1): err= 0: pid=3984302: Fri Apr 26 13:01:47 2024 00:18:43.157 read: IOPS=512, BW=2050KiB/s (2100kB/s)(2116KiB/1032msec) 00:18:43.157 slat (nsec): min=6529, max=60421, avg=23917.70, stdev=5566.01 00:18:43.157 clat (usec): min=167, max=42547, avg=1232.47, stdev=5044.27 00:18:43.157 lat (usec): min=173, max=42571, avg=1256.39, stdev=5044.38 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[ 186], 5.00th=[ 330], 10.00th=[ 388], 20.00th=[ 465], 00:18:43.157 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 693], 00:18:43.157 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 848], 00:18:43.157 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:18:43.157 | 99.99th=[42730] 00:18:43.157 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:18:43.157 slat (nsec): min=9157, max=66158, avg=24385.52, stdev=10696.41 00:18:43.157 clat (usec): min=101, max=821, avg=323.76, stdev=160.18 00:18:43.157 lat (usec): min=110, max=853, avg=348.14, stdev=165.30 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[ 108], 5.00th=[ 119], 10.00th=[ 127], 20.00th=[ 155], 00:18:43.157 | 30.00th=[ 229], 40.00th=[ 253], 50.00th=[ 285], 60.00th=[ 343], 00:18:43.157 | 70.00th=[ 392], 80.00th=[ 482], 
90.00th=[ 578], 95.00th=[ 619], 00:18:43.157 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 775], 99.95th=[ 824], 00:18:43.157 | 99.99th=[ 824] 00:18:43.157 bw ( KiB/s): min= 3456, max= 4736, per=37.67%, avg=4096.00, stdev=905.10, samples=2 00:18:43.157 iops : min= 864, max= 1184, avg=1024.00, stdev=226.27, samples=2 00:18:43.157 lat (usec) : 250=26.92%, 500=35.61%, 750=28.40%, 1000=8.50% 00:18:43.157 lat (msec) : 2=0.06%, 50=0.52% 00:18:43.157 cpu : usr=1.36%, sys=4.36%, ctx=1554, majf=0, minf=1 00:18:43.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:43.157 job2: (groupid=0, jobs=1): err= 0: pid=3984303: Fri Apr 26 13:01:47 2024 00:18:43.157 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1025msec) 00:18:43.157 slat (nsec): min=24375, max=25227, avg=24681.47, stdev=273.72 00:18:43.157 clat (usec): min=41717, max=43023, avg=42334.84, stdev=491.94 00:18:43.157 lat (usec): min=41742, max=43048, avg=42359.52, stdev=491.87 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:43.157 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:43.157 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:18:43.157 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:43.157 | 99.99th=[43254] 00:18:43.157 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:18:43.157 slat (nsec): min=9354, max=49461, avg=26804.23, stdev=9321.05 00:18:43.157 clat (usec): min=126, max=776, avg=397.25, stdev=138.93 00:18:43.157 lat (usec): min=135, max=813, avg=424.05, stdev=143.40 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[ 137], 5.00th=[ 155], 10.00th=[ 239], 20.00th=[ 273], 00:18:43.157 | 30.00th=[ 310], 40.00th=[ 359], 50.00th=[ 383], 60.00th=[ 433], 00:18:43.157 | 70.00th=[ 482], 80.00th=[ 515], 90.00th=[ 578], 95.00th=[ 644], 00:18:43.157 | 99.00th=[ 709], 99.50th=[ 750], 99.90th=[ 775], 99.95th=[ 775], 00:18:43.157 | 99.99th=[ 775] 00:18:43.157 bw ( KiB/s): min= 4096, max= 4096, per=37.67%, avg=4096.00, stdev= 0.00, samples=1 00:18:43.157 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:43.157 lat (usec) : 250=14.12%, 500=59.89%, 750=21.85%, 1000=0.56% 00:18:43.157 lat (msec) : 50=3.58% 00:18:43.157 cpu : usr=0.29%, sys=1.66%, ctx=531, majf=0, minf=1 00:18:43.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:43.157 job3: (groupid=0, jobs=1): err= 0: pid=3984304: Fri Apr 26 13:01:47 2024 00:18:43.157 read: IOPS=229, BW=919KiB/s (941kB/s)(920KiB/1001msec) 00:18:43.157 slat (nsec): min=8358, max=41972, avg=24488.38, stdev=3305.26 00:18:43.157 clat (usec): min=849, max=42621, avg=2864.41, stdev=8347.15 00:18:43.157 lat (usec): min=874, max=42646, avg=2888.89, stdev=8347.23 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[ 
898], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1037], 00:18:43.157 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:18:43.157 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1254], 00:18:43.157 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:18:43.157 | 99.99th=[42730] 00:18:43.157 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:43.157 slat (nsec): min=9430, max=52438, avg=29862.52, stdev=7833.34 00:18:43.157 clat (usec): min=175, max=960, avg=616.41, stdev=138.41 00:18:43.157 lat (usec): min=186, max=991, avg=646.27, stdev=141.39 00:18:43.157 clat percentiles (usec): 00:18:43.157 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 429], 20.00th=[ 498], 00:18:43.157 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:18:43.157 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 840], 00:18:43.157 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:18:43.157 | 99.99th=[ 963] 00:18:43.157 bw ( KiB/s): min= 4096, max= 4096, per=37.67%, avg=4096.00, stdev= 0.00, samples=1 00:18:43.157 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:43.157 lat (usec) : 250=0.27%, 500=13.75%, 750=43.26%, 1000=15.50% 00:18:43.157 lat (msec) : 2=25.88%, 50=1.35% 00:18:43.157 cpu : usr=1.20%, sys=2.00%, ctx=742, majf=0, minf=1 00:18:43.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.157 issued rwts: total=230,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:43.157 00:18:43.157 Run status group 0 (all jobs): 00:18:43.157 READ: bw=5000KiB/s (5120kB/s), 74.1KiB/s-2050KiB/s (75.9kB/s-2100kB/s), io=5160KiB (5284kB), run=1001-1032msec 00:18:43.157 WRITE: bw=10.6MiB/s (11.1MB/s), 1998KiB/s-3969KiB/s (2046kB/s-4064kB/s), io=11.0MiB (11.5MB), run=1001-1032msec 00:18:43.157 00:18:43.157 Disk stats (read/write): 00:18:43.158 nvme0n1: ios=560/512, merge=0/0, ticks=577/278, in_queue=855, util=87.98% 00:18:43.158 nvme0n2: ios=553/1024, merge=0/0, ticks=482/309, in_queue=791, util=87.84% 00:18:43.158 nvme0n3: ios=14/512, merge=0/0, ticks=593/203, in_queue=796, util=88.47% 00:18:43.158 nvme0n4: ios=109/512, merge=0/0, ticks=1236/308, in_queue=1544, util=96.68% 00:18:43.158 13:01:47 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:43.158 [global] 00:18:43.158 thread=1 00:18:43.158 invalidate=1 00:18:43.158 rw=randwrite 00:18:43.158 time_based=1 00:18:43.158 runtime=1 00:18:43.158 ioengine=libaio 00:18:43.158 direct=1 00:18:43.158 bs=4096 00:18:43.158 iodepth=1 00:18:43.158 norandommap=0 00:18:43.158 numjobs=1 00:18:43.158 00:18:43.158 verify_dump=1 00:18:43.158 verify_backlog=512 00:18:43.158 verify_state_save=0 00:18:43.158 do_verify=1 00:18:43.158 verify=crc32c-intel 00:18:43.158 [job0] 00:18:43.158 filename=/dev/nvme0n1 00:18:43.158 [job1] 00:18:43.158 filename=/dev/nvme0n2 00:18:43.158 [job2] 00:18:43.158 filename=/dev/nvme0n3 00:18:43.158 [job3] 00:18:43.158 filename=/dev/nvme0n4 00:18:43.158 Could not set queue depth (nvme0n1) 00:18:43.158 Could not set queue depth (nvme0n2) 00:18:43.158 Could not set queue depth (nvme0n3) 00:18:43.158 Could not set queue depth (nvme0n4) 00:18:43.423 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.423 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.423 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.423 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.423 fio-3.35 00:18:43.423 Starting 4 threads 00:18:44.809 00:18:44.809 job0: (groupid=0, jobs=1): err= 0: pid=3984823: Fri Apr 26 13:01:49 2024 00:18:44.809 read: IOPS=456, BW=1827KiB/s (1870kB/s)(1896KiB/1038msec) 00:18:44.809 slat (nsec): min=8140, max=45560, avg=27395.95, stdev=3400.15 00:18:44.809 clat (usec): min=743, max=42844, avg=1476.99, stdev=4204.20 00:18:44.809 lat (usec): min=770, max=42870, avg=1504.38, stdev=4204.08 00:18:44.809 clat percentiles (usec): 00:18:44.809 | 1.00th=[ 799], 5.00th=[ 898], 10.00th=[ 922], 20.00th=[ 971], 00:18:44.809 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1029], 60.00th=[ 1057], 00:18:44.809 | 70.00th=[ 1074], 80.00th=[ 1139], 90.00th=[ 1205], 95.00th=[ 1237], 00:18:44.809 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:18:44.809 | 99.99th=[42730] 00:18:44.809 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:18:44.809 slat (nsec): min=8801, max=55161, avg=31040.72, stdev=8622.51 00:18:44.809 clat (usec): min=256, max=984, avg=585.67, stdev=115.13 00:18:44.809 lat (usec): min=269, max=995, avg=616.71, stdev=117.46 00:18:44.809 clat percentiles (usec): 00:18:44.809 | 1.00th=[ 314], 5.00th=[ 379], 10.00th=[ 429], 20.00th=[ 486], 00:18:44.809 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619], 00:18:44.809 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 766], 00:18:44.809 | 99.00th=[ 807], 99.50th=[ 857], 99.90th=[ 988], 99.95th=[ 988], 00:18:44.809 | 99.99th=[ 988] 00:18:44.809 bw ( KiB/s): min= 4096, max= 4096, per=47.99%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.809 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.809 lat (usec) : 500=11.97%, 750=36.92%, 1000=17.75% 00:18:44.809 lat (msec) : 2=32.86%, 50=0.51% 00:18:44.809 cpu : usr=2.03%, sys=3.76%, ctx=988, majf=0, minf=1 00:18:44.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.809 issued rwts: total=474,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.809 job1: (groupid=0, jobs=1): err= 0: pid=3984824: Fri Apr 26 13:01:49 2024 00:18:44.809 read: IOPS=31, BW=127KiB/s (131kB/s)(128KiB/1004msec) 00:18:44.809 slat (nsec): min=8560, max=27851, avg=24161.25, stdev=3793.96 00:18:44.809 clat (usec): min=710, max=43004, avg=20436.48, stdev=20923.29 00:18:44.809 lat (usec): min=735, max=43029, avg=20460.64, stdev=20922.95 00:18:44.809 clat percentiles (usec): 00:18:44.809 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 947], 20.00th=[ 1012], 00:18:44.809 | 30.00th=[ 1090], 40.00th=[ 1139], 50.00th=[ 1549], 60.00th=[41681], 00:18:44.809 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:18:44.810 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:44.810 | 99.99th=[43254] 00:18:44.810 write: IOPS=509, BW=2040KiB/s 
(2089kB/s)(2048KiB/1004msec); 0 zone resets 00:18:44.810 slat (nsec): min=9292, max=50673, avg=28353.26, stdev=8714.65 00:18:44.810 clat (usec): min=243, max=1086, avg=644.23, stdev=119.80 00:18:44.810 lat (usec): min=253, max=1135, avg=672.59, stdev=124.06 00:18:44.810 clat percentiles (usec): 00:18:44.810 | 1.00th=[ 363], 5.00th=[ 408], 10.00th=[ 482], 20.00th=[ 545], 00:18:44.810 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 693], 00:18:44.810 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 807], 00:18:44.810 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 1090], 99.95th=[ 1090], 00:18:44.810 | 99.99th=[ 1090] 00:18:44.810 bw ( KiB/s): min= 4096, max= 4096, per=47.99%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.810 lat (usec) : 250=0.18%, 500=11.58%, 750=66.54%, 1000=16.54% 00:18:44.810 lat (msec) : 2=2.39%, 50=2.76% 00:18:44.810 cpu : usr=0.40%, sys=1.89%, ctx=545, majf=0, minf=1 00:18:44.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.810 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.810 job2: (groupid=0, jobs=1): err= 0: pid=3984825: Fri Apr 26 13:01:49 2024 00:18:44.810 read: IOPS=261, BW=1047KiB/s (1072kB/s)(1048KiB/1001msec) 00:18:44.810 slat (nsec): min=7682, max=59400, avg=25852.00, stdev=3751.43 00:18:44.810 clat (usec): min=739, max=43066, avg=2484.47, stdev=7549.42 00:18:44.810 lat (usec): min=765, max=43091, avg=2510.32, stdev=7549.40 00:18:44.810 clat percentiles (usec): 00:18:44.810 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 979], 00:18:44.810 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1057], 00:18:44.810 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1221], 00:18:44.810 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:44.810 | 99.99th=[43254] 00:18:44.810 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:44.810 slat (nsec): min=9477, max=55278, avg=29737.59, stdev=8498.65 00:18:44.810 clat (usec): min=273, max=935, avg=626.78, stdev=122.31 00:18:44.810 lat (usec): min=283, max=968, avg=656.52, stdev=125.18 00:18:44.810 clat percentiles (usec): 00:18:44.810 | 1.00th=[ 338], 5.00th=[ 400], 10.00th=[ 465], 20.00th=[ 523], 00:18:44.810 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:18:44.810 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 824], 00:18:44.810 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:18:44.810 | 99.99th=[ 938] 00:18:44.810 bw ( KiB/s): min= 4096, max= 4096, per=47.99%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.810 lat (usec) : 500=10.72%, 750=44.57%, 1000=22.09% 00:18:44.810 lat (msec) : 2=21.32%, 10=0.13%, 50=1.16% 00:18:44.810 cpu : usr=1.00%, sys=2.40%, ctx=776, majf=0, minf=1 00:18:44.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.810 issued rwts: total=262,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:44.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.810 job3: (groupid=0, jobs=1): err= 0: pid=3984826: Fri Apr 26 13:01:49 2024 00:18:44.810 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:44.810 slat (nsec): min=7162, max=58166, avg=25456.60, stdev=2921.77 00:18:44.810 clat (usec): min=559, max=1203, avg=976.96, stdev=94.60 00:18:44.810 lat (usec): min=584, max=1228, avg=1002.41, stdev=95.00 00:18:44.810 clat percentiles (usec): 00:18:44.810 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 840], 20.00th=[ 914], 00:18:44.810 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:18:44.810 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:18:44.810 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:18:44.810 | 99.99th=[ 1205] 00:18:44.810 write: IOPS=678, BW=2713KiB/s (2778kB/s)(2716KiB/1001msec); 0 zone resets 00:18:44.810 slat (nsec): min=9576, max=52972, avg=29998.29, stdev=8278.51 00:18:44.810 clat (usec): min=251, max=1072, avg=672.59, stdev=133.24 00:18:44.810 lat (usec): min=282, max=1104, avg=702.59, stdev=135.58 00:18:44.810 clat percentiles (usec): 00:18:44.810 | 1.00th=[ 347], 5.00th=[ 445], 10.00th=[ 498], 20.00th=[ 570], 00:18:44.810 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 685], 60.00th=[ 709], 00:18:44.810 | 70.00th=[ 734], 80.00th=[ 783], 90.00th=[ 848], 95.00th=[ 889], 00:18:44.810 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1074], 00:18:44.810 | 99.99th=[ 1074] 00:18:44.810 bw ( KiB/s): min= 4096, max= 4096, per=47.99%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.810 lat (usec) : 500=6.30%, 750=36.61%, 1000=36.19% 00:18:44.810 lat (msec) : 2=20.91% 00:18:44.810 cpu : usr=2.00%, sys=3.20%, ctx=1193, majf=0, minf=1 00:18:44.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.810 issued rwts: total=512,679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.810 00:18:44.810 Run status group 0 (all jobs): 00:18:44.810 READ: bw=4933KiB/s (5051kB/s), 127KiB/s-2046KiB/s (131kB/s-2095kB/s), io=5120KiB (5243kB), run=1001-1038msec 00:18:44.810 WRITE: bw=8536KiB/s (8741kB/s), 1973KiB/s-2713KiB/s (2020kB/s-2778kB/s), io=8860KiB (9073kB), run=1001-1038msec 00:18:44.810 00:18:44.810 Disk stats (read/write): 00:18:44.810 nvme0n1: ios=515/512, merge=0/0, ticks=500/230, in_queue=730, util=87.17% 00:18:44.810 nvme0n2: ios=81/512, merge=0/0, ticks=719/329, in_queue=1048, util=88.90% 00:18:44.810 nvme0n3: ios=159/512, merge=0/0, ticks=592/303, in_queue=895, util=95.47% 00:18:44.810 nvme0n4: ios=505/512, merge=0/0, ticks=545/333, in_queue=878, util=97.02% 00:18:44.810 13:01:49 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:44.810 [global] 00:18:44.810 thread=1 00:18:44.810 invalidate=1 00:18:44.810 rw=write 00:18:44.810 time_based=1 00:18:44.810 runtime=1 00:18:44.810 ioengine=libaio 00:18:44.810 direct=1 00:18:44.810 bs=4096 00:18:44.810 iodepth=128 00:18:44.810 norandommap=0 00:18:44.810 numjobs=1 00:18:44.810 00:18:44.810 verify_dump=1 00:18:44.810 verify_backlog=512 00:18:44.810 verify_state_save=0 00:18:44.810 do_verify=1 00:18:44.810 
verify=crc32c-intel 00:18:44.810 [job0] 00:18:44.810 filename=/dev/nvme0n1 00:18:44.810 [job1] 00:18:44.810 filename=/dev/nvme0n2 00:18:44.810 [job2] 00:18:44.810 filename=/dev/nvme0n3 00:18:44.810 [job3] 00:18:44.810 filename=/dev/nvme0n4 00:18:44.810 Could not set queue depth (nvme0n1) 00:18:44.810 Could not set queue depth (nvme0n2) 00:18:44.810 Could not set queue depth (nvme0n3) 00:18:44.810 Could not set queue depth (nvme0n4) 00:18:45.081 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.081 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.081 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.081 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.081 fio-3.35 00:18:45.081 Starting 4 threads 00:18:46.468 00:18:46.468 job0: (groupid=0, jobs=1): err= 0: pid=3985338: Fri Apr 26 13:01:51 2024 00:18:46.468 read: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec) 00:18:46.468 slat (nsec): min=935, max=9286.4k, avg=61827.53, stdev=468035.31 00:18:46.468 clat (usec): min=2233, max=21040, avg=8284.69, stdev=2734.46 00:18:46.468 lat (usec): min=2236, max=21977, avg=8346.52, stdev=2767.95 00:18:46.468 clat percentiles (usec): 00:18:46.468 | 1.00th=[ 3916], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6194], 00:18:46.468 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 8160], 00:18:46.468 | 70.00th=[ 8979], 80.00th=[10552], 90.00th=[11994], 95.00th=[13960], 00:18:46.468 | 99.00th=[16188], 99.50th=[17433], 99.90th=[21103], 99.95th=[21103], 00:18:46.468 | 99.99th=[21103] 00:18:46.468 write: IOPS=8358, BW=32.6MiB/s (34.2MB/s)(32.9MiB/1007msec); 0 zone resets 00:18:46.468 slat (nsec): min=1637, max=7879.9k, avg=54061.82, stdev=390458.81 00:18:46.468 clat (usec): min=1142, max=18105, avg=7103.37, stdev=2934.32 00:18:46.468 lat (usec): min=1152, max=18112, avg=7157.43, stdev=2953.52 00:18:46.468 clat percentiles (usec): 00:18:46.468 | 1.00th=[ 2540], 5.00th=[ 3752], 10.00th=[ 4146], 20.00th=[ 4817], 00:18:46.468 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 6980], 00:18:46.468 | 70.00th=[ 7373], 80.00th=[ 8586], 90.00th=[11338], 95.00th=[14222], 00:18:46.468 | 99.00th=[16909], 99.50th=[17957], 99.90th=[17957], 99.95th=[18220], 00:18:46.468 | 99.99th=[18220] 00:18:46.468 bw ( KiB/s): min=29696, max=36616, per=36.50%, avg=33156.00, stdev=4893.18, samples=2 00:18:46.468 iops : min= 7424, max= 9154, avg=8289.00, stdev=1223.29, samples=2 00:18:46.468 lat (msec) : 2=0.19%, 4=4.71%, 10=76.80%, 20=18.19%, 50=0.10% 00:18:46.468 cpu : usr=7.16%, sys=6.86%, ctx=564, majf=0, minf=1 00:18:46.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:46.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.468 issued rwts: total=8192,8417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.468 job1: (groupid=0, jobs=1): err= 0: pid=3985350: Fri Apr 26 13:01:51 2024 00:18:46.468 read: IOPS=4225, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1007msec) 00:18:46.468 slat (nsec): min=910, max=16855k, avg=114858.36, stdev=870882.68 00:18:46.468 clat (usec): min=1483, max=56679, avg=14631.22, stdev=9298.87 00:18:46.468 lat (usec): min=3030, 
max=56711, avg=14746.07, stdev=9373.09 00:18:46.468 clat percentiles (usec): 00:18:46.468 | 1.00th=[ 4555], 5.00th=[ 5932], 10.00th=[ 6849], 20.00th=[ 7504], 00:18:46.468 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11338], 60.00th=[12780], 00:18:46.468 | 70.00th=[15139], 80.00th=[18220], 90.00th=[30278], 95.00th=[34866], 00:18:46.468 | 99.00th=[43254], 99.50th=[49546], 99.90th=[54789], 99.95th=[54789], 00:18:46.468 | 99.99th=[56886] 00:18:46.468 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:18:46.468 slat (nsec): min=1595, max=22601k, avg=104156.57, stdev=904353.12 00:18:46.468 clat (usec): min=3717, max=48770, avg=14147.77, stdev=8503.65 00:18:46.468 lat (usec): min=3725, max=49924, avg=14251.93, stdev=8584.14 00:18:46.468 clat percentiles (usec): 00:18:46.468 | 1.00th=[ 3916], 5.00th=[ 4883], 10.00th=[ 6783], 20.00th=[ 7635], 00:18:46.468 | 30.00th=[ 8291], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[12387], 00:18:46.468 | 70.00th=[16909], 80.00th=[21103], 90.00th=[27657], 95.00th=[31065], 00:18:46.468 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[43779], 00:18:46.468 | 99.99th=[49021] 00:18:46.468 bw ( KiB/s): min=16384, max=20480, per=20.29%, avg=18432.00, stdev=2896.31, samples=2 00:18:46.468 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:18:46.468 lat (msec) : 2=0.01%, 4=0.87%, 10=34.66%, 20=44.21%, 50=20.09% 00:18:46.468 lat (msec) : 100=0.16% 00:18:46.468 cpu : usr=3.58%, sys=4.27%, ctx=250, majf=0, minf=1 00:18:46.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:46.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.468 issued rwts: total=4255,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.469 job2: (groupid=0, jobs=1): err= 0: pid=3985351: Fri Apr 26 13:01:51 2024 00:18:46.469 read: IOPS=3650, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1007msec) 00:18:46.469 slat (nsec): min=969, max=20815k, avg=131469.58, stdev=936480.11 00:18:46.469 clat (usec): min=2535, max=71461, avg=15638.43, stdev=8616.34 00:18:46.469 lat (usec): min=3607, max=71468, avg=15769.90, stdev=8692.31 00:18:46.469 clat percentiles (usec): 00:18:46.469 | 1.00th=[ 6128], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[10683], 00:18:46.469 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13829], 60.00th=[14615], 00:18:46.469 | 70.00th=[16057], 80.00th=[17695], 90.00th=[20841], 95.00th=[30278], 00:18:46.469 | 99.00th=[68682], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:18:46.469 | 99.99th=[71828] 00:18:46.469 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:18:46.469 slat (nsec): min=1659, max=15748k, avg=119194.57, stdev=812679.19 00:18:46.469 clat (usec): min=1808, max=69117, avg=16870.84, stdev=11487.38 00:18:46.469 lat (usec): min=1818, max=69128, avg=16990.04, stdev=11550.09 00:18:46.469 clat percentiles (usec): 00:18:46.469 | 1.00th=[ 4490], 5.00th=[ 5080], 10.00th=[ 6521], 20.00th=[ 8291], 00:18:46.469 | 30.00th=[ 9765], 40.00th=[11207], 50.00th=[13829], 60.00th=[15139], 00:18:46.469 | 70.00th=[17695], 80.00th=[22676], 90.00th=[33817], 95.00th=[41157], 00:18:46.469 | 99.00th=[61604], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:18:46.469 | 99.99th=[68682] 00:18:46.469 bw ( KiB/s): min=16096, max=16384, per=17.88%, avg=16240.00, stdev=203.65, samples=2 00:18:46.469 iops : min= 4024, max= 
4096, avg=4060.00, stdev=50.91, samples=2 00:18:46.469 lat (msec) : 2=0.03%, 4=0.37%, 10=21.41%, 20=58.16%, 50=17.79% 00:18:46.469 lat (msec) : 100=2.24% 00:18:46.469 cpu : usr=3.58%, sys=3.48%, ctx=292, majf=0, minf=1 00:18:46.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:46.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.469 issued rwts: total=3676,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.469 job3: (groupid=0, jobs=1): err= 0: pid=3985352: Fri Apr 26 13:01:51 2024 00:18:46.469 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:18:46.469 slat (nsec): min=926, max=15875k, avg=81560.52, stdev=636331.78 00:18:46.469 clat (usec): min=1070, max=37546, avg=11357.31, stdev=4879.92 00:18:46.469 lat (usec): min=1096, max=37556, avg=11438.87, stdev=4906.91 00:18:46.469 clat percentiles (usec): 00:18:46.469 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 8225], 00:18:46.469 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[11600], 00:18:46.469 | 70.00th=[11994], 80.00th=[12911], 90.00th=[16909], 95.00th=[20579], 00:18:46.469 | 99.00th=[30016], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:18:46.469 | 99.99th=[37487] 00:18:46.469 write: IOPS=5719, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1005msec); 0 zone resets 00:18:46.469 slat (nsec): min=1635, max=10141k, avg=80965.91, stdev=521519.19 00:18:46.469 clat (usec): min=988, max=47340, avg=10920.74, stdev=5151.30 00:18:46.469 lat (usec): min=1007, max=47342, avg=11001.71, stdev=5178.63 00:18:46.469 clat percentiles (usec): 00:18:46.469 | 1.00th=[ 2343], 5.00th=[ 4621], 10.00th=[ 5538], 20.00th=[ 7832], 00:18:46.469 | 30.00th=[ 8586], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11469], 00:18:46.469 | 70.00th=[11600], 80.00th=[12649], 90.00th=[15401], 95.00th=[20317], 00:18:46.469 | 99.00th=[32637], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:18:46.469 | 99.99th=[47449] 00:18:46.469 bw ( KiB/s): min=20544, max=24576, per=24.83%, avg=22560.00, stdev=2851.05, samples=2 00:18:46.469 iops : min= 5136, max= 6144, avg=5640.00, stdev=712.76, samples=2 00:18:46.469 lat (usec) : 1000=0.02% 00:18:46.469 lat (msec) : 2=0.61%, 4=0.97%, 10=41.81%, 20=50.88%, 50=5.72% 00:18:46.469 cpu : usr=4.38%, sys=5.98%, ctx=432, majf=0, minf=1 00:18:46.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:46.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.469 issued rwts: total=5632,5748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.469 00:18:46.469 Run status group 0 (all jobs): 00:18:46.469 READ: bw=84.4MiB/s (88.5MB/s), 14.3MiB/s-31.8MiB/s (15.0MB/s-33.4MB/s), io=85.0MiB (89.1MB), run=1005-1007msec 00:18:46.469 WRITE: bw=88.7MiB/s (93.0MB/s), 15.9MiB/s-32.6MiB/s (16.7MB/s-34.2MB/s), io=89.3MiB (93.7MB), run=1005-1007msec 00:18:46.469 00:18:46.469 Disk stats (read/write): 00:18:46.469 nvme0n1: ios=6709/6815, merge=0/0, ticks=53139/47615, in_queue=100754, util=84.97% 00:18:46.469 nvme0n2: ios=3637/3767, merge=0/0, ticks=31934/28642, in_queue=60576, util=88.79% 00:18:46.469 nvme0n3: ios=3385/3584, merge=0/0, ticks=42927/40378, in_queue=83305, util=93.04% 00:18:46.469 nvme0n4: 
ios=4659/4859, merge=0/0, ticks=31609/26827, in_queue=58436, util=96.91% 00:18:46.469 13:01:51 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:46.469 [global] 00:18:46.469 thread=1 00:18:46.469 invalidate=1 00:18:46.469 rw=randwrite 00:18:46.469 time_based=1 00:18:46.469 runtime=1 00:18:46.469 ioengine=libaio 00:18:46.469 direct=1 00:18:46.469 bs=4096 00:18:46.469 iodepth=128 00:18:46.469 norandommap=0 00:18:46.469 numjobs=1 00:18:46.469 00:18:46.469 verify_dump=1 00:18:46.469 verify_backlog=512 00:18:46.469 verify_state_save=0 00:18:46.469 do_verify=1 00:18:46.469 verify=crc32c-intel 00:18:46.469 [job0] 00:18:46.469 filename=/dev/nvme0n1 00:18:46.469 [job1] 00:18:46.469 filename=/dev/nvme0n2 00:18:46.469 [job2] 00:18:46.469 filename=/dev/nvme0n3 00:18:46.469 [job3] 00:18:46.469 filename=/dev/nvme0n4 00:18:46.469 Could not set queue depth (nvme0n1) 00:18:46.469 Could not set queue depth (nvme0n2) 00:18:46.469 Could not set queue depth (nvme0n3) 00:18:46.469 Could not set queue depth (nvme0n4) 00:18:46.731 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.731 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.731 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.731 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.731 fio-3.35 00:18:46.731 Starting 4 threads 00:18:48.205 00:18:48.205 job0: (groupid=0, jobs=1): err= 0: pid=3985793: Fri Apr 26 13:01:52 2024 00:18:48.205 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:18:48.205 slat (nsec): min=973, max=21387k, avg=146625.88, stdev=1166952.25 00:18:48.205 clat (usec): min=5665, max=85831, avg=21552.88, stdev=11327.36 00:18:48.205 lat (usec): min=5673, max=85836, avg=21699.50, stdev=11383.04 00:18:48.205 clat percentiles (usec): 00:18:48.205 | 1.00th=[ 7832], 5.00th=[11469], 10.00th=[11600], 20.00th=[12256], 00:18:48.205 | 30.00th=[13435], 40.00th=[14484], 50.00th=[19530], 60.00th=[21890], 00:18:48.205 | 70.00th=[24511], 80.00th=[29230], 90.00th=[31589], 95.00th=[42730], 00:18:48.205 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:18:48.205 | 99.99th=[85459] 00:18:48.205 write: IOPS=2801, BW=10.9MiB/s (11.5MB/s)(11.1MiB/1010msec); 0 zone resets 00:18:48.205 slat (nsec): min=1649, max=16524k, avg=189043.45, stdev=1030546.75 00:18:48.205 clat (usec): min=1220, max=103846, avg=25813.99, stdev=19144.44 00:18:48.205 lat (usec): min=1263, max=103862, avg=26003.03, stdev=19241.68 00:18:48.205 clat percentiles (msec): 00:18:48.205 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 14], 00:18:48.205 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 22], 00:18:48.205 | 70.00th=[ 23], 80.00th=[ 28], 90.00th=[ 54], 95.00th=[ 74], 00:18:48.205 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 105], 99.95th=[ 105], 00:18:48.205 | 99.99th=[ 105] 00:18:48.205 bw ( KiB/s): min= 9408, max=12183, per=12.23%, avg=10795.50, stdev=1962.22, samples=2 00:18:48.205 iops : min= 2352, max= 3045, avg=2698.50, stdev=490.02, samples=2 00:18:48.205 lat (msec) : 2=0.06%, 4=0.11%, 10=6.92%, 20=38.66%, 50=46.86% 00:18:48.205 lat (msec) : 100=7.27%, 250=0.11% 00:18:48.205 cpu : usr=2.38%, sys=2.48%, ctx=293, majf=0, minf=1 00:18:48.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:48.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.205 issued rwts: total=2560,2830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.205 job1: (groupid=0, jobs=1): err= 0: pid=3985811: Fri Apr 26 13:01:52 2024 00:18:48.205 read: IOPS=7837, BW=30.6MiB/s (32.1MB/s)(30.7MiB/1004msec) 00:18:48.205 slat (nsec): min=857, max=14337k, avg=60892.44, stdev=398139.04 00:18:48.205 clat (usec): min=1004, max=29462, avg=7935.89, stdev=2669.88 00:18:48.205 lat (usec): min=3842, max=40577, avg=7996.78, stdev=2696.45 00:18:48.205 clat percentiles (usec): 00:18:48.205 | 1.00th=[ 3949], 5.00th=[ 4948], 10.00th=[ 5997], 20.00th=[ 6587], 00:18:48.205 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:18:48.205 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9765], 95.00th=[11338], 00:18:48.205 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:18:48.205 | 99.99th=[29492] 00:18:48.205 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:18:48.205 slat (nsec): min=1433, max=14189k, avg=59096.70, stdev=426259.89 00:18:48.205 clat (usec): min=3086, max=23917, avg=7902.20, stdev=2305.62 00:18:48.205 lat (usec): min=3094, max=23934, avg=7961.29, stdev=2334.18 00:18:48.205 clat percentiles (usec): 00:18:48.205 | 1.00th=[ 3851], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 6783], 00:18:48.205 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7898], 00:18:48.205 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[12518], 00:18:48.205 | 99.00th=[18482], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:18:48.205 | 99.99th=[23987] 00:18:48.205 bw ( KiB/s): min=32768, max=32768, per=37.13%, avg=32768.00, stdev= 0.00, samples=2 00:18:48.205 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:18:48.205 lat (msec) : 2=0.01%, 4=1.57%, 10=90.20%, 20=7.36%, 50=0.87% 00:18:48.205 cpu : usr=6.08%, sys=6.28%, ctx=608, majf=0, minf=1 00:18:48.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:48.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.205 issued rwts: total=7869,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.205 job2: (groupid=0, jobs=1): err= 0: pid=3985831: Fri Apr 26 13:01:52 2024 00:18:48.205 read: IOPS=3289, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1005msec) 00:18:48.205 slat (nsec): min=999, max=26002k, avg=169690.14, stdev=1292161.55 00:18:48.205 clat (usec): min=3385, max=69144, avg=19781.49, stdev=11007.84 00:18:48.205 lat (usec): min=6588, max=69154, avg=19951.18, stdev=11135.53 00:18:48.205 clat percentiles (usec): 00:18:48.205 | 1.00th=[ 7570], 5.00th=[10028], 10.00th=[10552], 20.00th=[11731], 00:18:48.205 | 30.00th=[12387], 40.00th=[14091], 50.00th=[16712], 60.00th=[20055], 00:18:48.205 | 70.00th=[22676], 80.00th=[25035], 90.00th=[32900], 95.00th=[36439], 00:18:48.205 | 99.00th=[63701], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:18:48.205 | 99.99th=[68682] 00:18:48.205 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:18:48.205 slat (nsec): min=1648, max=15207k, avg=117706.47, stdev=756139.43 00:18:48.205 clat (usec): 
min=1151, max=69153, avg=17309.80, stdev=7752.83 00:18:48.205 lat (usec): min=1162, max=69162, avg=17427.51, stdev=7795.04 00:18:48.205 clat percentiles (usec): 00:18:48.205 | 1.00th=[ 5014], 5.00th=[ 7570], 10.00th=[10028], 20.00th=[11207], 00:18:48.205 | 30.00th=[11731], 40.00th=[14091], 50.00th=[16581], 60.00th=[19792], 00:18:48.205 | 70.00th=[21103], 80.00th=[21627], 90.00th=[23987], 95.00th=[27657], 00:18:48.205 | 99.00th=[50594], 99.50th=[52691], 99.90th=[63177], 99.95th=[68682], 00:18:48.205 | 99.99th=[68682] 00:18:48.205 bw ( KiB/s): min=13072, max=15600, per=16.24%, avg=14336.00, stdev=1787.57, samples=2 00:18:48.205 iops : min= 3268, max= 3900, avg=3584.00, stdev=446.89, samples=2 00:18:48.205 lat (msec) : 2=0.03%, 4=0.17%, 10=6.78%, 20=53.41%, 50=37.46% 00:18:48.205 lat (msec) : 100=2.15% 00:18:48.205 cpu : usr=2.59%, sys=3.49%, ctx=283, majf=0, minf=1 00:18:48.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:48.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.205 issued rwts: total=3306,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.205 job3: (groupid=0, jobs=1): err= 0: pid=3985837: Fri Apr 26 13:01:52 2024 00:18:48.205 read: IOPS=7164, BW=28.0MiB/s (29.3MB/s)(28.2MiB/1008msec) 00:18:48.205 slat (nsec): min=980, max=22051k, avg=70388.90, stdev=576445.09 00:18:48.205 clat (usec): min=2561, max=36881, avg=9545.16, stdev=4238.26 00:18:48.205 lat (usec): min=2565, max=36888, avg=9615.55, stdev=4265.67 00:18:48.205 clat percentiles (usec): 00:18:48.205 | 1.00th=[ 3851], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7308], 00:18:48.205 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8717], 00:18:48.205 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[13173], 95.00th=[16712], 00:18:48.205 | 99.00th=[29492], 99.50th=[29492], 99.90th=[36963], 99.95th=[36963], 00:18:48.205 | 99.99th=[36963] 00:18:48.205 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets 00:18:48.205 slat (nsec): min=1598, max=16181k, avg=56753.78, stdev=430801.03 00:18:48.205 clat (usec): min=1187, max=28301, avg=7670.27, stdev=2742.06 00:18:48.205 lat (usec): min=1198, max=28320, avg=7727.02, stdev=2759.13 00:18:48.205 clat percentiles (usec): 00:18:48.205 | 1.00th=[ 2507], 5.00th=[ 4080], 10.00th=[ 4621], 20.00th=[ 5276], 00:18:48.205 | 30.00th=[ 6456], 40.00th=[ 7242], 50.00th=[ 7701], 60.00th=[ 7898], 00:18:48.205 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[10814], 95.00th=[12649], 00:18:48.205 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:18:48.205 | 99.99th=[28181] 00:18:48.205 bw ( KiB/s): min=28104, max=32752, per=34.47%, avg=30428.00, stdev=3286.63, samples=2 00:18:48.205 iops : min= 7026, max= 8188, avg=7607.00, stdev=821.66, samples=2 00:18:48.205 lat (msec) : 2=0.34%, 4=2.62%, 10=75.68%, 20=19.23%, 50=2.13% 00:18:48.205 cpu : usr=5.56%, sys=8.04%, ctx=527, majf=0, minf=1 00:18:48.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:48.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.205 issued rwts: total=7222,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.205 00:18:48.205 Run status group 0 (all 
jobs): 00:18:48.205 READ: bw=81.1MiB/s (85.0MB/s), 9.90MiB/s-30.6MiB/s (10.4MB/s-32.1MB/s), io=81.9MiB (85.8MB), run=1004-1010msec 00:18:48.206 WRITE: bw=86.2MiB/s (90.4MB/s), 10.9MiB/s-31.9MiB/s (11.5MB/s-33.4MB/s), io=87.1MiB (91.3MB), run=1004-1010msec 00:18:48.206 00:18:48.206 Disk stats (read/write): 00:18:48.206 nvme0n1: ios=2304/2560, merge=0/0, ticks=47232/56963, in_queue=104195, util=99.80% 00:18:48.206 nvme0n2: ios=6705/6819, merge=0/0, ticks=23043/25313, in_queue=48356, util=89.60% 00:18:48.206 nvme0n3: ios=2590/2631, merge=0/0, ticks=54194/48935, in_queue=103129, util=94.52% 00:18:48.206 nvme0n4: ios=6369/6656, merge=0/0, ticks=52564/48132, in_queue=100696, util=98.08% 00:18:48.206 13:01:52 -- target/fio.sh@55 -- # sync 00:18:48.206 13:01:52 -- target/fio.sh@59 -- # fio_pid=3985922 00:18:48.206 13:01:52 -- target/fio.sh@61 -- # sleep 3 00:18:48.206 13:01:52 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:48.206 [global] 00:18:48.206 thread=1 00:18:48.206 invalidate=1 00:18:48.206 rw=read 00:18:48.206 time_based=1 00:18:48.206 runtime=10 00:18:48.206 ioengine=libaio 00:18:48.206 direct=1 00:18:48.206 bs=4096 00:18:48.206 iodepth=1 00:18:48.206 norandommap=1 00:18:48.206 numjobs=1 00:18:48.206 00:18:48.206 [job0] 00:18:48.206 filename=/dev/nvme0n1 00:18:48.206 [job1] 00:18:48.206 filename=/dev/nvme0n2 00:18:48.206 [job2] 00:18:48.206 filename=/dev/nvme0n3 00:18:48.206 [job3] 00:18:48.206 filename=/dev/nvme0n4 00:18:48.206 Could not set queue depth (nvme0n1) 00:18:48.206 Could not set queue depth (nvme0n2) 00:18:48.206 Could not set queue depth (nvme0n3) 00:18:48.206 Could not set queue depth (nvme0n4) 00:18:48.473 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.473 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.473 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.473 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.473 fio-3.35 00:18:48.473 Starting 4 threads 00:18:51.015 13:01:55 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:51.275 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=282624, buflen=4096 00:18:51.275 fio: pid=3986318, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:51.275 13:01:56 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:51.275 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=13324288, buflen=4096 00:18:51.275 fio: pid=3986311, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:51.275 13:01:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.275 13:01:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:51.535 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=14422016, buflen=4096 00:18:51.535 fio: pid=3986283, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:51.535 13:01:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.535 13:01:56 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:51.796 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10723328, buflen=4096 00:18:51.796 fio: pid=3986293, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:51.796 13:01:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.796 13:01:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:51.796 00:18:51.796 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3986283: Fri Apr 26 13:01:56 2024 00:18:51.796 read: IOPS=1210, BW=4842KiB/s (4958kB/s)(13.8MiB/2909msec) 00:18:51.796 slat (usec): min=6, max=14318, avg=29.36, stdev=304.15 00:18:51.796 clat (usec): min=251, max=42750, avg=785.31, stdev=714.30 00:18:51.796 lat (usec): min=259, max=42775, avg=814.67, stdev=776.26 00:18:51.796 clat percentiles (usec): 00:18:51.796 | 1.00th=[ 457], 5.00th=[ 570], 10.00th=[ 652], 20.00th=[ 709], 00:18:51.796 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 816], 00:18:51.796 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 889], 00:18:51.796 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1221], 99.95th=[ 2024], 00:18:51.796 | 99.99th=[42730] 00:18:51.796 bw ( KiB/s): min= 4888, max= 4968, per=40.38%, avg=4936.00, stdev=31.50, samples=5 00:18:51.796 iops : min= 1222, max= 1242, avg=1234.00, stdev= 7.87, samples=5 00:18:51.796 lat (usec) : 500=1.59%, 750=29.56%, 1000=68.48% 00:18:51.796 lat (msec) : 2=0.28%, 4=0.03%, 50=0.03% 00:18:51.796 cpu : usr=1.20%, sys=3.09%, ctx=3524, majf=0, minf=1 00:18:51.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.796 issued rwts: total=3522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.796 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3986293: Fri Apr 26 13:01:56 2024 00:18:51.796 read: IOPS=845, BW=3382KiB/s (3464kB/s)(10.2MiB/3096msec) 00:18:51.796 slat (usec): min=6, max=23953, avg=60.61, stdev=775.39 00:18:51.796 clat (usec): min=559, max=42695, avg=1105.41, stdev=834.36 00:18:51.796 lat (usec): min=568, max=42703, avg=1166.03, stdev=1137.97 00:18:51.796 clat percentiles (usec): 00:18:51.796 | 1.00th=[ 783], 5.00th=[ 898], 10.00th=[ 955], 20.00th=[ 1020], 00:18:51.796 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:18:51.796 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:18:51.796 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 6783], 99.95th=[ 7046], 00:18:51.796 | 99.99th=[42730] 00:18:51.796 bw ( KiB/s): min= 2707, max= 3576, per=27.92%, avg=3413.83, stdev=346.48, samples=6 00:18:51.796 iops : min= 676, max= 894, avg=853.33, stdev=86.93, samples=6 00:18:51.796 lat (usec) : 750=0.61%, 1000=15.39% 00:18:51.796 lat (msec) : 2=83.85%, 10=0.08%, 50=0.04% 00:18:51.796 cpu : usr=0.87%, sys=2.49%, ctx=2626, majf=0, minf=1 00:18:51.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:51.796 issued rwts: total=2619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.796 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3986311: Fri Apr 26 13:01:56 2024 00:18:51.796 read: IOPS=1187, BW=4749KiB/s (4863kB/s)(12.7MiB/2740msec) 00:18:51.796 slat (usec): min=6, max=15179, avg=30.25, stdev=321.29 00:18:51.796 clat (usec): min=339, max=1164, avg=800.25, stdev=73.43 00:18:51.796 lat (usec): min=363, max=16033, avg=830.49, stdev=330.88 00:18:51.796 clat percentiles (usec): 00:18:51.796 | 1.00th=[ 578], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 742], 00:18:51.796 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 824], 00:18:51.796 | 70.00th=[ 840], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 889], 00:18:51.796 | 99.00th=[ 996], 99.50th=[ 1057], 99.90th=[ 1123], 99.95th=[ 1156], 00:18:51.796 | 99.99th=[ 1172] 00:18:51.796 bw ( KiB/s): min= 4792, max= 4864, per=39.54%, avg=4833.60, stdev=28.51, samples=5 00:18:51.796 iops : min= 1198, max= 1216, avg=1208.40, stdev= 7.13, samples=5 00:18:51.796 lat (usec) : 500=0.25%, 750=20.74%, 1000=78.09% 00:18:51.796 lat (msec) : 2=0.89% 00:18:51.796 cpu : usr=1.24%, sys=3.03%, ctx=3257, majf=0, minf=1 00:18:51.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.796 issued rwts: total=3254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.796 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3986318: Fri Apr 26 13:01:56 2024 00:18:51.796 read: IOPS=27, BW=107KiB/s (110kB/s)(276KiB/2571msec) 00:18:51.796 slat (nsec): min=7147, max=42527, avg=25253.61, stdev=4084.78 00:18:51.796 clat (usec): min=590, max=42950, avg=36930.02, stdev=13151.90 00:18:51.796 lat (usec): min=616, max=42975, avg=36955.27, stdev=13152.47 00:18:51.796 clat percentiles (usec): 00:18:51.796 | 1.00th=[ 594], 5.00th=[ 840], 10.00th=[ 1074], 20.00th=[41157], 00:18:51.796 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:18:51.796 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:51.796 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:51.796 | 99.99th=[42730] 00:18:51.796 bw ( KiB/s): min= 96, max= 152, per=0.88%, avg=107.20, stdev=25.04, samples=5 00:18:51.796 iops : min= 24, max= 38, avg=26.80, stdev= 6.26, samples=5 00:18:51.796 lat (usec) : 750=2.86%, 1000=4.29% 00:18:51.796 lat (msec) : 2=4.29%, 50=87.14% 00:18:51.796 cpu : usr=0.00%, sys=0.12%, ctx=70, majf=0, minf=2 00:18:51.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.796 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.796 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.796 00:18:51.796 Run status group 0 (all jobs): 00:18:51.796 READ: bw=11.9MiB/s (12.5MB/s), 107KiB/s-4842KiB/s (110kB/s-4958kB/s), io=37.0MiB (38.8MB), run=2571-3096msec 00:18:51.796 00:18:51.796 Disk stats (read/write): 00:18:51.796 nvme0n1: ios=3459/0, merge=0/0, ticks=2629/0, 
in_queue=2629, util=93.96% 00:18:51.796 nvme0n2: ios=2619/0, merge=0/0, ticks=2834/0, in_queue=2834, util=92.60% 00:18:51.796 nvme0n3: ios=3123/0, merge=0/0, ticks=2439/0, in_queue=2439, util=96.00% 00:18:51.796 nvme0n4: ios=63/0, merge=0/0, ticks=2298/0, in_queue=2298, util=96.06% 00:18:51.796 13:01:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:51.796 13:01:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:52.057 13:01:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.057 13:01:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:52.318 13:01:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.318 13:01:57 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:52.318 13:01:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.318 13:01:57 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:52.578 13:01:57 -- target/fio.sh@69 -- # fio_status=0 00:18:52.578 13:01:57 -- target/fio.sh@70 -- # wait 3985922 00:18:52.578 13:01:57 -- target/fio.sh@70 -- # fio_status=4 00:18:52.578 13:01:57 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.578 13:01:57 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:52.578 13:01:57 -- common/autotest_common.sh@1205 -- # local i=0 00:18:52.578 13:01:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:52.578 13:01:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.578 13:01:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:52.578 13:01:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.578 13:01:57 -- common/autotest_common.sh@1217 -- # return 0 00:18:52.578 13:01:57 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:52.578 13:01:57 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:52.578 nvmf hotplug test: fio failed as expected 00:18:52.578 13:01:57 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.837 13:01:57 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:52.837 13:01:57 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:52.837 13:01:57 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:52.837 13:01:57 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:52.837 13:01:57 -- target/fio.sh@91 -- # nvmftestfini 00:18:52.837 13:01:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:52.837 13:01:57 -- nvmf/common.sh@117 -- # sync 00:18:52.837 13:01:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:52.837 13:01:57 -- nvmf/common.sh@120 -- # set +e 00:18:52.837 13:01:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:52.837 13:01:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:52.837 rmmod nvme_tcp 00:18:52.837 rmmod nvme_fabrics 00:18:52.837 rmmod nvme_keyring 00:18:52.837 13:01:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:18:52.837 13:01:57 -- nvmf/common.sh@124 -- # set -e 00:18:52.837 13:01:57 -- nvmf/common.sh@125 -- # return 0 00:18:52.837 13:01:57 -- nvmf/common.sh@478 -- # '[' -n 3982416 ']' 00:18:52.837 13:01:57 -- nvmf/common.sh@479 -- # killprocess 3982416 00:18:52.837 13:01:57 -- common/autotest_common.sh@936 -- # '[' -z 3982416 ']' 00:18:52.837 13:01:57 -- common/autotest_common.sh@940 -- # kill -0 3982416 00:18:52.837 13:01:57 -- common/autotest_common.sh@941 -- # uname 00:18:52.837 13:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:52.837 13:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3982416 00:18:52.837 13:01:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:52.837 13:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:52.837 13:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3982416' 00:18:52.837 killing process with pid 3982416 00:18:52.837 13:01:57 -- common/autotest_common.sh@955 -- # kill 3982416 00:18:52.837 13:01:57 -- common/autotest_common.sh@960 -- # wait 3982416 00:18:53.097 13:01:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:53.097 13:01:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:53.097 13:01:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:53.097 13:01:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.097 13:01:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.097 13:01:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.097 13:01:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.097 13:01:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.637 13:02:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:55.637 00:18:55.637 real 0m28.610s 00:18:55.637 user 2m28.456s 00:18:55.637 sys 0m9.288s 00:18:55.637 13:02:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:55.637 13:02:00 -- common/autotest_common.sh@10 -- # set +x 00:18:55.637 ************************************ 00:18:55.637 END TEST nvmf_fio_target 00:18:55.637 ************************************ 00:18:55.637 13:02:00 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:55.637 13:02:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:55.637 13:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:55.637 13:02:00 -- common/autotest_common.sh@10 -- # set +x 00:18:55.637 ************************************ 00:18:55.637 START TEST nvmf_bdevio 00:18:55.637 ************************************ 00:18:55.637 13:02:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:55.637 * Looking for test storage... 
00:18:55.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.637 13:02:00 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.637 13:02:00 -- nvmf/common.sh@7 -- # uname -s 00:18:55.637 13:02:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.637 13:02:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.637 13:02:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.637 13:02:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.637 13:02:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.637 13:02:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.637 13:02:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.637 13:02:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.637 13:02:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.637 13:02:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.637 13:02:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.637 13:02:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.637 13:02:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.637 13:02:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.637 13:02:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.637 13:02:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.637 13:02:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.637 13:02:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.637 13:02:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.637 13:02:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.638 13:02:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.638 13:02:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.638 13:02:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.638 13:02:00 -- paths/export.sh@5 -- # export PATH 00:18:55.638 13:02:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.638 13:02:00 -- nvmf/common.sh@47 -- # : 0 00:18:55.638 13:02:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.638 13:02:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.638 13:02:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.638 13:02:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.638 13:02:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.638 13:02:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.638 13:02:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.638 13:02:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.638 13:02:00 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.638 13:02:00 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:55.638 13:02:00 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:55.638 13:02:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:55.638 13:02:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.638 13:02:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:55.638 13:02:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:55.638 13:02:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:55.638 13:02:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.638 13:02:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.638 13:02:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.638 13:02:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:55.638 13:02:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:55.638 13:02:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.638 13:02:00 -- common/autotest_common.sh@10 -- # set +x 00:19:02.222 13:02:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:02.222 13:02:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:02.222 13:02:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:02.222 13:02:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:02.222 13:02:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:02.222 13:02:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:02.222 13:02:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:02.222 13:02:06 -- nvmf/common.sh@295 -- # net_devs=() 00:19:02.222 13:02:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:02.222 13:02:06 -- nvmf/common.sh@296 
-- # e810=() 00:19:02.222 13:02:06 -- nvmf/common.sh@296 -- # local -ga e810 00:19:02.222 13:02:06 -- nvmf/common.sh@297 -- # x722=() 00:19:02.222 13:02:06 -- nvmf/common.sh@297 -- # local -ga x722 00:19:02.222 13:02:06 -- nvmf/common.sh@298 -- # mlx=() 00:19:02.222 13:02:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:02.222 13:02:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.222 13:02:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:02.222 13:02:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:02.222 13:02:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:02.222 13:02:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:02.223 13:02:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:02.223 13:02:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.223 13:02:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:02.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:02.223 13:02:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.223 13:02:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:02.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:02.223 13:02:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:02.223 13:02:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.223 13:02:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.223 13:02:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:02.223 13:02:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.223 13:02:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:02.223 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:02.223 13:02:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.223 13:02:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.223 13:02:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.223 13:02:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:02.223 13:02:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.223 13:02:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:02.223 Found net devices under 0000:31:00.1: cvl_0_1 00:19:02.223 13:02:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.223 13:02:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:02.223 13:02:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:02.223 13:02:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:02.223 13:02:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:02.223 13:02:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.223 13:02:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.223 13:02:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.223 13:02:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:02.223 13:02:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.223 13:02:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.223 13:02:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:02.223 13:02:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.223 13:02:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.223 13:02:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:02.223 13:02:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:02.223 13:02:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.223 13:02:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.223 13:02:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.223 13:02:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.223 13:02:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:02.223 13:02:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.223 13:02:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.223 13:02:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.223 13:02:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:02.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:19:02.223 00:19:02.223 --- 10.0.0.2 ping statistics --- 00:19:02.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.223 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:19:02.223 13:02:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:02.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:19:02.485 00:19:02.485 --- 10.0.0.1 ping statistics --- 00:19:02.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.485 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:19:02.485 13:02:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.485 13:02:07 -- nvmf/common.sh@411 -- # return 0 00:19:02.485 13:02:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:02.485 13:02:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.485 13:02:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:02.485 13:02:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:02.485 13:02:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.485 13:02:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:02.485 13:02:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:02.485 13:02:07 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:02.485 13:02:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:02.485 13:02:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:02.485 13:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:02.485 13:02:07 -- nvmf/common.sh@470 -- # nvmfpid=3991271 00:19:02.485 13:02:07 -- nvmf/common.sh@471 -- # waitforlisten 3991271 00:19:02.485 13:02:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:02.485 13:02:07 -- common/autotest_common.sh@817 -- # '[' -z 3991271 ']' 00:19:02.485 13:02:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.485 13:02:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:02.485 13:02:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.485 13:02:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:02.485 13:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:02.485 [2024-04-26 13:02:07.388511] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:02.485 [2024-04-26 13:02:07.388609] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.485 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.485 [2024-04-26 13:02:07.476977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:02.747 [2024-04-26 13:02:07.567464] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.747 [2024-04-26 13:02:07.567523] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.747 [2024-04-26 13:02:07.567531] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.747 [2024-04-26 13:02:07.567538] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.747 [2024-04-26 13:02:07.567545] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
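For reference, the device scan above matched the two Intel E810 ports (device ID 0x159b) to net devices cvl_0_0 and cvl_0_1, and nvmf_tcp_init then built the back-to-back test topology: cvl_0_0 (target side) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 (initiator side) stays in the default namespace with 10.0.0.1/24, and NVMe/TCP traffic to port 4420 is allowed through iptables. A minimal sketch of the same setup, using the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two ping checks at the end are the ones whose output is logged just before nvmfappstart launches the target.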
00:19:02.747 [2024-04-26 13:02:07.567706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:02.747 [2024-04-26 13:02:07.567882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:02.747 [2024-04-26 13:02:07.567997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.747 [2024-04-26 13:02:07.567998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:03.318 13:02:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:03.318 13:02:08 -- common/autotest_common.sh@850 -- # return 0 00:19:03.318 13:02:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:03.318 13:02:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:03.318 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:03.318 13:02:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.318 13:02:08 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:03.318 13:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.318 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:03.318 [2024-04-26 13:02:08.234192] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.318 13:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.318 13:02:08 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:03.318 13:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.318 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:03.318 Malloc0 00:19:03.318 13:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.318 13:02:08 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:03.318 13:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.318 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:03.318 13:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.318 13:02:08 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:03.318 13:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.318 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:03.318 13:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.318 13:02:08 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:03.318 13:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.318 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:03.318 [2024-04-26 13:02:08.299049] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.318 13:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.318 13:02:08 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:03.318 13:02:08 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:03.318 13:02:08 -- nvmf/common.sh@521 -- # config=() 00:19:03.318 13:02:08 -- nvmf/common.sh@521 -- # local subsystem config 00:19:03.318 13:02:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:03.318 13:02:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:03.318 { 00:19:03.318 "params": { 00:19:03.318 "name": "Nvme$subsystem", 00:19:03.318 "trtype": "$TEST_TRANSPORT", 00:19:03.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.318 "adrfam": "ipv4", 00:19:03.318 "trsvcid": 
"$NVMF_PORT", 00:19:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.318 "hdgst": ${hdgst:-false}, 00:19:03.318 "ddgst": ${ddgst:-false} 00:19:03.318 }, 00:19:03.318 "method": "bdev_nvme_attach_controller" 00:19:03.318 } 00:19:03.318 EOF 00:19:03.318 )") 00:19:03.318 13:02:08 -- nvmf/common.sh@543 -- # cat 00:19:03.318 13:02:08 -- nvmf/common.sh@545 -- # jq . 00:19:03.318 13:02:08 -- nvmf/common.sh@546 -- # IFS=, 00:19:03.318 13:02:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:03.318 "params": { 00:19:03.318 "name": "Nvme1", 00:19:03.318 "trtype": "tcp", 00:19:03.318 "traddr": "10.0.0.2", 00:19:03.318 "adrfam": "ipv4", 00:19:03.318 "trsvcid": "4420", 00:19:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.318 "hdgst": false, 00:19:03.318 "ddgst": false 00:19:03.318 }, 00:19:03.318 "method": "bdev_nvme_attach_controller" 00:19:03.318 }' 00:19:03.318 [2024-04-26 13:02:08.358056] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:03.318 [2024-04-26 13:02:08.358146] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991525 ] 00:19:03.578 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.578 [2024-04-26 13:02:08.425950] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:03.578 [2024-04-26 13:02:08.499122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.578 [2024-04-26 13:02:08.499242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.578 [2024-04-26 13:02:08.499246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.839 I/O targets: 00:19:03.839 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:03.839 00:19:03.839 00:19:03.839 CUnit - A unit testing framework for C - Version 2.1-3 00:19:03.839 http://cunit.sourceforge.net/ 00:19:03.839 00:19:03.839 00:19:03.839 Suite: bdevio tests on: Nvme1n1 00:19:03.839 Test: blockdev write read block ...passed 00:19:03.839 Test: blockdev write zeroes read block ...passed 00:19:03.839 Test: blockdev write zeroes read no split ...passed 00:19:03.839 Test: blockdev write zeroes read split ...passed 00:19:03.839 Test: blockdev write zeroes read split partial ...passed 00:19:03.839 Test: blockdev reset ...[2024-04-26 13:02:08.756457] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:03.839 [2024-04-26 13:02:08.756518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667fb0 (9): Bad file descriptor 00:19:03.839 [2024-04-26 13:02:08.768729] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:03.839 passed 00:19:03.839 Test: blockdev write read 8 blocks ...passed 00:19:03.839 Test: blockdev write read size > 128k ...passed 00:19:03.839 Test: blockdev write read invalid size ...passed 00:19:03.839 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.839 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.839 Test: blockdev write read max offset ...passed 00:19:04.100 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:04.100 Test: blockdev writev readv 8 blocks ...passed 00:19:04.100 Test: blockdev writev readv 30 x 1block ...passed 00:19:04.100 Test: blockdev writev readv block ...passed 00:19:04.100 Test: blockdev writev readv size > 128k ...passed 00:19:04.100 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:04.100 Test: blockdev comparev and writev ...[2024-04-26 13:02:09.034358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.034381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.034392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.034398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.034936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.034944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.034954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.034960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.035475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.035483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.035492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.035498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.035988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.035996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.036006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.100 [2024-04-26 13:02:09.036011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:04.100 passed 00:19:04.100 Test: blockdev nvme passthru rw ...passed 00:19:04.100 Test: blockdev nvme passthru vendor specific ...[2024-04-26 13:02:09.120735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:04.100 [2024-04-26 13:02:09.120747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.121111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:04.100 [2024-04-26 13:02:09.121120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.121444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:04.100 [2024-04-26 13:02:09.121451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:04.100 [2024-04-26 13:02:09.121775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:04.100 [2024-04-26 13:02:09.121782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:04.100 passed 00:19:04.100 Test: blockdev nvme admin passthru ...passed 00:19:04.360 Test: blockdev copy ...passed 00:19:04.360 00:19:04.360 Run Summary: Type Total Ran Passed Failed Inactive 00:19:04.360 suites 1 1 n/a 0 0 00:19:04.360 tests 23 23 23 0 0 00:19:04.360 asserts 152 152 152 0 n/a 00:19:04.360 00:19:04.360 Elapsed time = 1.089 seconds 00:19:04.360 13:02:09 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.360 13:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.360 13:02:09 -- common/autotest_common.sh@10 -- # set +x 00:19:04.360 13:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.360 13:02:09 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:04.360 13:02:09 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:04.360 13:02:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:04.360 13:02:09 -- nvmf/common.sh@117 -- # sync 00:19:04.360 13:02:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.360 13:02:09 -- nvmf/common.sh@120 -- # set +e 00:19:04.360 13:02:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.360 13:02:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.360 rmmod nvme_tcp 00:19:04.360 rmmod nvme_fabrics 00:19:04.360 rmmod nvme_keyring 00:19:04.360 13:02:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.360 13:02:09 -- nvmf/common.sh@124 -- # set -e 00:19:04.360 13:02:09 -- nvmf/common.sh@125 -- # return 0 00:19:04.360 13:02:09 -- nvmf/common.sh@478 -- # '[' -n 3991271 ']' 00:19:04.360 13:02:09 -- nvmf/common.sh@479 -- # killprocess 3991271 00:19:04.360 13:02:09 -- common/autotest_common.sh@936 -- # '[' -z 3991271 ']' 00:19:04.360 13:02:09 -- common/autotest_common.sh@940 -- # kill -0 3991271 00:19:04.360 13:02:09 -- common/autotest_common.sh@941 -- # uname 00:19:04.360 13:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:04.360 13:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3991271 00:19:04.619 13:02:09 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:04.619 13:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:04.619 13:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3991271' 00:19:04.619 killing process with pid 3991271 00:19:04.619 13:02:09 -- common/autotest_common.sh@955 -- # kill 3991271 00:19:04.619 13:02:09 -- common/autotest_common.sh@960 -- # wait 3991271 00:19:04.619 13:02:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:04.619 13:02:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:04.619 13:02:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:04.619 13:02:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.619 13:02:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.619 13:02:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.619 13:02:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.619 13:02:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.166 13:02:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:07.166 00:19:07.166 real 0m11.414s 00:19:07.166 user 0m11.869s 00:19:07.166 sys 0m5.765s 00:19:07.166 13:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:07.166 13:02:11 -- common/autotest_common.sh@10 -- # set +x 00:19:07.166 ************************************ 00:19:07.166 END TEST nvmf_bdevio 00:19:07.166 ************************************ 00:19:07.166 13:02:11 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:19:07.166 13:02:11 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:07.166 13:02:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:07.166 13:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:07.166 13:02:11 -- common/autotest_common.sh@10 -- # set +x 00:19:07.166 ************************************ 00:19:07.166 START TEST nvmf_bdevio_no_huge 00:19:07.166 ************************************ 00:19:07.166 13:02:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:07.166 * Looking for test storage... 
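At the end of the pass, nvmftestfini undoes nvmftestinit: the kernel initiator modules are unloaded, the nvmf_tgt process (pid 3991271) is killed and reaped, the cvl_0_0_ns_spdk namespace is removed by _remove_spdk_ns, and the initiator-side address is flushed. The visible shell steps boil down to:

    modprobe -v -r nvme-tcp        # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill 3991271                   # killprocess, followed by a wait in the parent shell
    ip -4 addr flush cvl_0_1

The same flow is then repeated below with --no-hugepages for the nvmf_bdevio_no_huge variant.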
00:19:07.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.166 13:02:11 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.166 13:02:11 -- nvmf/common.sh@7 -- # uname -s 00:19:07.166 13:02:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.166 13:02:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.166 13:02:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.166 13:02:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.166 13:02:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.166 13:02:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.166 13:02:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.166 13:02:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.166 13:02:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.166 13:02:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.166 13:02:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.166 13:02:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.166 13:02:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.166 13:02:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.166 13:02:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.166 13:02:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.166 13:02:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.166 13:02:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.166 13:02:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.166 13:02:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.166 13:02:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.166 13:02:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.166 13:02:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.166 13:02:12 -- paths/export.sh@5 -- # export PATH 00:19:07.166 13:02:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.166 13:02:12 -- nvmf/common.sh@47 -- # : 0 00:19:07.166 13:02:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.166 13:02:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.166 13:02:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.166 13:02:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.166 13:02:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.166 13:02:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.166 13:02:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.166 13:02:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.166 13:02:12 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.166 13:02:12 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.166 13:02:12 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:07.166 13:02:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:07.166 13:02:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.166 13:02:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:07.166 13:02:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:07.166 13:02:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:07.166 13:02:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.166 13:02:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.166 13:02:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.166 13:02:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:07.166 13:02:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:07.166 13:02:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.166 13:02:12 -- common/autotest_common.sh@10 -- # set +x 00:19:15.311 13:02:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:15.311 13:02:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:15.311 13:02:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:15.311 13:02:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:15.311 13:02:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:15.311 13:02:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:15.311 13:02:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:15.311 13:02:18 -- nvmf/common.sh@295 -- # net_devs=() 00:19:15.311 13:02:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:15.311 13:02:18 -- nvmf/common.sh@296 
-- # e810=() 00:19:15.311 13:02:18 -- nvmf/common.sh@296 -- # local -ga e810 00:19:15.311 13:02:18 -- nvmf/common.sh@297 -- # x722=() 00:19:15.311 13:02:18 -- nvmf/common.sh@297 -- # local -ga x722 00:19:15.311 13:02:18 -- nvmf/common.sh@298 -- # mlx=() 00:19:15.311 13:02:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:15.311 13:02:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.311 13:02:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.311 13:02:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.312 13:02:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:15.312 13:02:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:15.312 13:02:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:15.312 13:02:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.312 13:02:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:15.312 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:15.312 13:02:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.312 13:02:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:15.312 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:15.312 13:02:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:15.312 13:02:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.312 13:02:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.312 13:02:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:15.312 13:02:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.312 13:02:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:15.312 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:15.312 13:02:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.312 13:02:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.312 13:02:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.312 13:02:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:15.312 13:02:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.312 13:02:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:15.312 Found net devices under 0000:31:00.1: cvl_0_1 00:19:15.312 13:02:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.312 13:02:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:15.312 13:02:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:15.312 13:02:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:15.312 13:02:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:15.312 13:02:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.312 13:02:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.312 13:02:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.312 13:02:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:15.312 13:02:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.312 13:02:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.312 13:02:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:15.312 13:02:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.312 13:02:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.312 13:02:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:15.312 13:02:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:15.312 13:02:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.312 13:02:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.312 13:02:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.312 13:02:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.312 13:02:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:15.312 13:02:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.312 13:02:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.312 13:02:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.312 13:02:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:15.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:19:15.312 00:19:15.312 --- 10.0.0.2 ping statistics --- 00:19:15.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.312 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:19:15.312 13:02:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:19:15.312 00:19:15.312 --- 10.0.0.1 ping statistics --- 00:19:15.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.312 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:19:15.312 13:02:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.312 13:02:19 -- nvmf/common.sh@411 -- # return 0 00:19:15.312 13:02:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:15.312 13:02:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.312 13:02:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:15.312 13:02:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:15.312 13:02:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.312 13:02:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:15.312 13:02:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:15.312 13:02:19 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:15.312 13:02:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:15.312 13:02:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:15.312 13:02:19 -- common/autotest_common.sh@10 -- # set +x 00:19:15.312 13:02:19 -- nvmf/common.sh@470 -- # nvmfpid=3995931 00:19:15.312 13:02:19 -- nvmf/common.sh@471 -- # waitforlisten 3995931 00:19:15.312 13:02:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:15.312 13:02:19 -- common/autotest_common.sh@817 -- # '[' -z 3995931 ']' 00:19:15.312 13:02:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.312 13:02:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:15.312 13:02:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.312 13:02:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:15.312 13:02:19 -- common/autotest_common.sh@10 -- # set +x 00:19:15.312 [2024-04-26 13:02:19.257234] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:15.312 [2024-04-26 13:02:19.257301] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:15.312 [2024-04-26 13:02:19.352337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.312 [2024-04-26 13:02:19.456255] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.312 [2024-04-26 13:02:19.456307] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.312 [2024-04-26 13:02:19.456315] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.313 [2024-04-26 13:02:19.456322] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.313 [2024-04-26 13:02:19.456329] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:15.313 [2024-04-26 13:02:19.456495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:15.313 [2024-04-26 13:02:19.456644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:15.313 [2024-04-26 13:02:19.456697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.313 [2024-04-26 13:02:19.456697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:15.313 13:02:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:15.313 13:02:20 -- common/autotest_common.sh@850 -- # return 0 00:19:15.313 13:02:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:15.313 13:02:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:15.313 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 13:02:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.313 13:02:20 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:15.313 13:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.313 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 [2024-04-26 13:02:20.107888] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.313 13:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.313 13:02:20 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:15.313 13:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.313 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 Malloc0 00:19:15.313 13:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.313 13:02:20 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:15.313 13:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.313 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 13:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.313 13:02:20 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:15.313 13:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.313 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 13:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.313 13:02:20 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.313 13:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.313 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 [2024-04-26 13:02:20.161633] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.313 13:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.313 13:02:20 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:15.313 13:02:20 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:15.313 13:02:20 -- nvmf/common.sh@521 -- # config=() 00:19:15.313 13:02:20 -- nvmf/common.sh@521 -- # local subsystem config 00:19:15.313 13:02:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:15.313 13:02:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:15.313 { 00:19:15.313 "params": { 00:19:15.313 "name": "Nvme$subsystem", 00:19:15.313 "trtype": "$TEST_TRANSPORT", 00:19:15.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.313 "adrfam": "ipv4", 00:19:15.313 
"trsvcid": "$NVMF_PORT", 00:19:15.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.313 "hdgst": ${hdgst:-false}, 00:19:15.313 "ddgst": ${ddgst:-false} 00:19:15.313 }, 00:19:15.313 "method": "bdev_nvme_attach_controller" 00:19:15.313 } 00:19:15.313 EOF 00:19:15.313 )") 00:19:15.313 13:02:20 -- nvmf/common.sh@543 -- # cat 00:19:15.313 13:02:20 -- nvmf/common.sh@545 -- # jq . 00:19:15.313 13:02:20 -- nvmf/common.sh@546 -- # IFS=, 00:19:15.313 13:02:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:15.313 "params": { 00:19:15.313 "name": "Nvme1", 00:19:15.313 "trtype": "tcp", 00:19:15.313 "traddr": "10.0.0.2", 00:19:15.313 "adrfam": "ipv4", 00:19:15.313 "trsvcid": "4420", 00:19:15.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.313 "hdgst": false, 00:19:15.313 "ddgst": false 00:19:15.313 }, 00:19:15.313 "method": "bdev_nvme_attach_controller" 00:19:15.313 }' 00:19:15.313 [2024-04-26 13:02:20.214485] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:15.313 [2024-04-26 13:02:20.214563] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3996206 ] 00:19:15.313 [2024-04-26 13:02:20.286200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:15.574 [2024-04-26 13:02:20.381694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.574 [2024-04-26 13:02:20.381780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.574 [2024-04-26 13:02:20.381783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.835 I/O targets: 00:19:15.835 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:15.835 00:19:15.835 00:19:15.835 CUnit - A unit testing framework for C - Version 2.1-3 00:19:15.835 http://cunit.sourceforge.net/ 00:19:15.835 00:19:15.835 00:19:15.835 Suite: bdevio tests on: Nvme1n1 00:19:15.835 Test: blockdev write read block ...passed 00:19:15.835 Test: blockdev write zeroes read block ...passed 00:19:15.835 Test: blockdev write zeroes read no split ...passed 00:19:15.835 Test: blockdev write zeroes read split ...passed 00:19:15.835 Test: blockdev write zeroes read split partial ...passed 00:19:15.835 Test: blockdev reset ...[2024-04-26 13:02:20.821188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:15.835 [2024-04-26 13:02:20.821252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf154e0 (9): Bad file descriptor 00:19:15.835 [2024-04-26 13:02:20.840890] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:15.835 passed 00:19:15.835 Test: blockdev write read 8 blocks ...passed 00:19:15.835 Test: blockdev write read size > 128k ...passed 00:19:15.835 Test: blockdev write read invalid size ...passed 00:19:15.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:15.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:15.835 Test: blockdev write read max offset ...passed 00:19:16.097 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:16.097 Test: blockdev writev readv 8 blocks ...passed 00:19:16.097 Test: blockdev writev readv 30 x 1block ...passed 00:19:16.097 Test: blockdev writev readv block ...passed 00:19:16.097 Test: blockdev writev readv size > 128k ...passed 00:19:16.097 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:16.097 Test: blockdev comparev and writev ...[2024-04-26 13:02:21.062057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.062080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.062091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.062097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.062465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.062473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.062483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.062488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.062808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.062816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.062825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.062830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.063222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.063230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.063239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:16.097 [2024-04-26 13:02:21.063245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.097 passed 00:19:16.097 Test: blockdev nvme passthru rw ...passed 00:19:16.097 Test: blockdev nvme passthru vendor specific ...[2024-04-26 13:02:21.147385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:16.097 [2024-04-26 13:02:21.147398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.147619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:16.097 [2024-04-26 13:02:21.147627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.147858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:16.097 [2024-04-26 13:02:21.147866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.097 [2024-04-26 13:02:21.148099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:16.097 [2024-04-26 13:02:21.148107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:16.097 passed 00:19:16.357 Test: blockdev nvme admin passthru ...passed 00:19:16.357 Test: blockdev copy ...passed 00:19:16.357 00:19:16.357 Run Summary: Type Total Ran Passed Failed Inactive 00:19:16.357 suites 1 1 n/a 0 0 00:19:16.357 tests 23 23 23 0 0 00:19:16.357 asserts 152 152 152 0 n/a 00:19:16.357 00:19:16.357 Elapsed time = 1.138 seconds 00:19:16.618 13:02:21 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.618 13:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.618 13:02:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.618 13:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.618 13:02:21 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:16.618 13:02:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:16.618 13:02:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:16.618 13:02:21 -- nvmf/common.sh@117 -- # sync 00:19:16.618 13:02:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.618 13:02:21 -- nvmf/common.sh@120 -- # set +e 00:19:16.618 13:02:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.618 13:02:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.618 rmmod nvme_tcp 00:19:16.618 rmmod nvme_fabrics 00:19:16.618 rmmod nvme_keyring 00:19:16.618 13:02:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.618 13:02:21 -- nvmf/common.sh@124 -- # set -e 00:19:16.618 13:02:21 -- nvmf/common.sh@125 -- # return 0 00:19:16.618 13:02:21 -- nvmf/common.sh@478 -- # '[' -n 3995931 ']' 00:19:16.618 13:02:21 -- nvmf/common.sh@479 -- # killprocess 3995931 00:19:16.619 13:02:21 -- common/autotest_common.sh@936 -- # '[' -z 3995931 ']' 00:19:16.619 13:02:21 -- common/autotest_common.sh@940 -- # kill -0 3995931 00:19:16.619 13:02:21 -- common/autotest_common.sh@941 -- # uname 00:19:16.619 13:02:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.619 13:02:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3995931 00:19:16.619 13:02:21 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:16.619 13:02:21 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:16.619 13:02:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3995931' 00:19:16.619 killing process with pid 3995931 00:19:16.619 13:02:21 -- common/autotest_common.sh@955 -- # kill 3995931 00:19:16.619 13:02:21 -- common/autotest_common.sh@960 -- # wait 3995931 00:19:16.880 13:02:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:16.880 13:02:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:16.880 13:02:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:16.880 13:02:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.880 13:02:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.880 13:02:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.880 13:02:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.880 13:02:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.443 13:02:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.443 00:19:19.443 real 0m12.011s 00:19:19.443 user 0m13.727s 00:19:19.444 sys 0m6.180s 00:19:19.444 13:02:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:19.444 13:02:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.444 ************************************ 00:19:19.444 END TEST nvmf_bdevio_no_huge 00:19:19.444 ************************************ 00:19:19.444 13:02:23 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:19.444 13:02:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:19.444 13:02:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.444 13:02:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.444 ************************************ 00:19:19.444 START TEST nvmf_tls 00:19:19.444 ************************************ 00:19:19.444 13:02:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:19.444 * Looking for test storage... 
00:19:19.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.444 13:02:24 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.444 13:02:24 -- nvmf/common.sh@7 -- # uname -s 00:19:19.444 13:02:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.444 13:02:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.444 13:02:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.444 13:02:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.444 13:02:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.444 13:02:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.444 13:02:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.444 13:02:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.444 13:02:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.444 13:02:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.444 13:02:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.444 13:02:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.444 13:02:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.444 13:02:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.444 13:02:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.444 13:02:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.444 13:02:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.444 13:02:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.444 13:02:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.444 13:02:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.444 13:02:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.444 13:02:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.444 13:02:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.444 13:02:24 -- paths/export.sh@5 -- # export PATH 00:19:19.444 13:02:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.444 13:02:24 -- nvmf/common.sh@47 -- # : 0 00:19:19.444 13:02:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.444 13:02:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.444 13:02:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.444 13:02:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.444 13:02:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.444 13:02:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.444 13:02:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.444 13:02:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.444 13:02:24 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:19.444 13:02:24 -- target/tls.sh@62 -- # nvmftestinit 00:19:19.444 13:02:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:19.444 13:02:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.444 13:02:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:19.444 13:02:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:19.444 13:02:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:19.444 13:02:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.444 13:02:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.444 13:02:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.444 13:02:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:19.444 13:02:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:19.444 13:02:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.444 13:02:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.032 13:02:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:26.032 13:02:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.032 13:02:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.032 13:02:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.032 13:02:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.032 13:02:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.032 13:02:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.032 13:02:30 -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.032 13:02:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.032 13:02:30 -- nvmf/common.sh@296 -- # e810=() 00:19:26.032 
13:02:30 -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.032 13:02:30 -- nvmf/common.sh@297 -- # x722=() 00:19:26.032 13:02:30 -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.032 13:02:30 -- nvmf/common.sh@298 -- # mlx=() 00:19:26.032 13:02:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.032 13:02:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.032 13:02:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.032 13:02:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.032 13:02:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.032 13:02:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.032 13:02:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.032 13:02:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.032 13:02:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.032 13:02:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:26.032 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:26.032 13:02:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.033 13:02:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:26.033 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:26.033 13:02:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.033 13:02:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.033 13:02:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.033 13:02:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:26.033 13:02:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.033 13:02:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:26.033 Found net devices under 
0000:31:00.0: cvl_0_0 00:19:26.033 13:02:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.033 13:02:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.033 13:02:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.033 13:02:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:26.033 13:02:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.033 13:02:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:26.033 Found net devices under 0000:31:00.1: cvl_0_1 00:19:26.033 13:02:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.033 13:02:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:26.033 13:02:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:26.033 13:02:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:26.033 13:02:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:26.033 13:02:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.033 13:02:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.033 13:02:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.033 13:02:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.033 13:02:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.033 13:02:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.033 13:02:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.033 13:02:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.033 13:02:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.033 13:02:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.033 13:02:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.033 13:02:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.033 13:02:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.033 13:02:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.033 13:02:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.033 13:02:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.033 13:02:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.295 13:02:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.295 13:02:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.295 13:02:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:19:26.295 00:19:26.295 --- 10.0.0.2 ping statistics --- 00:19:26.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.295 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:19:26.295 13:02:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:26.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:19:26.295 00:19:26.295 --- 10.0.0.1 ping statistics --- 00:19:26.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.296 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:19:26.296 13:02:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.296 13:02:31 -- nvmf/common.sh@411 -- # return 0 00:19:26.296 13:02:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:26.296 13:02:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.296 13:02:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:26.296 13:02:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:26.296 13:02:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.296 13:02:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:26.296 13:02:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:26.296 13:02:31 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:26.296 13:02:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:26.296 13:02:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:26.296 13:02:31 -- common/autotest_common.sh@10 -- # set +x 00:19:26.296 13:02:31 -- nvmf/common.sh@470 -- # nvmfpid=4000676 00:19:26.296 13:02:31 -- nvmf/common.sh@471 -- # waitforlisten 4000676 00:19:26.296 13:02:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:26.296 13:02:31 -- common/autotest_common.sh@817 -- # '[' -z 4000676 ']' 00:19:26.296 13:02:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.296 13:02:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:26.296 13:02:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.296 13:02:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:26.296 13:02:31 -- common/autotest_common.sh@10 -- # set +x 00:19:26.296 [2024-04-26 13:02:31.317826] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:26.296 [2024-04-26 13:02:31.317881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.296 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.557 [2024-04-26 13:02:31.402478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.557 [2024-04-26 13:02:31.474875] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.557 [2024-04-26 13:02:31.474925] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.557 [2024-04-26 13:02:31.474933] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.557 [2024-04-26 13:02:31.474940] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.557 [2024-04-26 13:02:31.474946] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
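Both pings succeed, so the topology that nvmf_tcp_init assembled is in place: of the two E810 ports discovered earlier, cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with TCP port 4420 opened in the firewall. The nvmf target is then launched inside that namespace with --wait-for-rpc, and its startup notices continue below. Condensed from the trace above (interface names are the ones discovered on this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator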
00:19:26.558 [2024-04-26 13:02:31.474976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.130 13:02:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:27.130 13:02:32 -- common/autotest_common.sh@850 -- # return 0 00:19:27.130 13:02:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:27.130 13:02:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:27.130 13:02:32 -- common/autotest_common.sh@10 -- # set +x 00:19:27.130 13:02:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.130 13:02:32 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:27.130 13:02:32 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:27.393 true 00:19:27.393 13:02:32 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:27.393 13:02:32 -- target/tls.sh@73 -- # jq -r .tls_version 00:19:27.654 13:02:32 -- target/tls.sh@73 -- # version=0 00:19:27.654 13:02:32 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:27.654 13:02:32 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:27.654 13:02:32 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:27.654 13:02:32 -- target/tls.sh@81 -- # jq -r .tls_version 00:19:27.915 13:02:32 -- target/tls.sh@81 -- # version=13 00:19:27.916 13:02:32 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:27.916 13:02:32 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:28.177 13:02:32 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.177 13:02:32 -- target/tls.sh@89 -- # jq -r .tls_version 00:19:28.177 13:02:33 -- target/tls.sh@89 -- # version=7 00:19:28.177 13:02:33 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:28.177 13:02:33 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.177 13:02:33 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:28.439 13:02:33 -- target/tls.sh@96 -- # ktls=false 00:19:28.439 13:02:33 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:28.439 13:02:33 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:28.701 13:02:33 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.701 13:02:33 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:28.701 13:02:33 -- target/tls.sh@104 -- # ktls=true 00:19:28.701 13:02:33 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:28.701 13:02:33 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:28.961 13:02:33 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.961 13:02:33 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:28.961 13:02:34 -- target/tls.sh@112 -- # ktls=false 00:19:28.961 13:02:34 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:28.961 13:02:34 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:19:28.961 13:02:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:28.962 13:02:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:28.962 13:02:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:28.962 13:02:34 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:19:28.962 13:02:34 -- nvmf/common.sh@693 -- # digest=1 00:19:28.962 13:02:34 -- nvmf/common.sh@694 -- # python - 00:19:29.222 13:02:34 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:29.222 13:02:34 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:29.222 13:02:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:29.222 13:02:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:29.222 13:02:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:29.222 13:02:34 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:19:29.222 13:02:34 -- nvmf/common.sh@693 -- # digest=1 00:19:29.222 13:02:34 -- nvmf/common.sh@694 -- # python - 00:19:29.222 13:02:34 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:29.222 13:02:34 -- target/tls.sh@121 -- # mktemp 00:19:29.222 13:02:34 -- target/tls.sh@121 -- # key_path=/tmp/tmp.OrdMxzELuR 00:19:29.222 13:02:34 -- target/tls.sh@122 -- # mktemp 00:19:29.222 13:02:34 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.WvMYWm7Vfp 00:19:29.222 13:02:34 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:29.222 13:02:34 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:29.222 13:02:34 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.OrdMxzELuR 00:19:29.222 13:02:34 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WvMYWm7Vfp 00:19:29.222 13:02:34 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:29.485 13:02:34 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:29.485 13:02:34 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.OrdMxzELuR 00:19:29.485 13:02:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.OrdMxzELuR 00:19:29.485 13:02:34 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.749 [2024-04-26 13:02:34.675329] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.749 13:02:34 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:30.056 13:02:34 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:30.056 [2024-04-26 13:02:34.964029] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.056 [2024-04-26 13:02:34.964199] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.056 13:02:34 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.339 malloc0 00:19:30.339 13:02:35 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.339 13:02:35 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OrdMxzELuR 00:19:30.601 [2024-04-26 13:02:35.407105] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:30.601 13:02:35 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OrdMxzELuR 00:19:30.601 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.606 Initializing NVMe Controllers 00:19:40.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:40.606 Initialization complete. Launching workers. 00:19:40.606 ======================================================== 00:19:40.606 Latency(us) 00:19:40.606 Device Information : IOPS MiB/s Average min max 00:19:40.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18725.96 73.15 3417.76 1060.01 4158.97 00:19:40.606 ======================================================== 00:19:40.606 Total : 18725.96 73.15 3417.76 1060.01 4158.97 00:19:40.606 00:19:40.606 13:02:45 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OrdMxzELuR 00:19:40.606 13:02:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.606 13:02:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.606 13:02:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.606 13:02:45 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OrdMxzELuR' 00:19:40.606 13:02:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.606 13:02:45 -- target/tls.sh@28 -- # bdevperf_pid=4003414 00:19:40.606 13:02:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.606 13:02:45 -- target/tls.sh@31 -- # waitforlisten 4003414 /var/tmp/bdevperf.sock 00:19:40.606 13:02:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.606 13:02:45 -- common/autotest_common.sh@817 -- # '[' -z 4003414 ']' 00:19:40.606 13:02:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.606 13:02:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.606 13:02:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.606 13:02:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.606 13:02:45 -- common/autotest_common.sh@10 -- # set +x 00:19:40.606 [2024-04-26 13:02:45.586597] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
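Before the spdk_nvme_perf run above, setup_nvmf_tgt configured the target (started with --wait-for-rpc) entirely through rpc.py: the ssl socket implementation is selected and pinned to TLS 1.3, a TCP transport and a subsystem with a malloc namespace are created, the listener is opened with -k (which triggers the experimental-TLS listen path noted in the log), and host1 is authorized with the PSK file written a moment earlier. Condensed sequence (rpc.py stands for scripts/rpc.py against the default /var/tmp/spdk.sock; the /tmp key path is this run's mktemp result):

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OrdMxzELuR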
00:19:40.606 [2024-04-26 13:02:45.586666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003414 ] 00:19:40.606 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.606 [2024-04-26 13:02:45.637000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.866 [2024-04-26 13:02:45.687413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.437 13:02:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:41.437 13:02:46 -- common/autotest_common.sh@850 -- # return 0 00:19:41.437 13:02:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OrdMxzELuR 00:19:41.437 [2024-04-26 13:02:46.464285] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.437 [2024-04-26 13:02:46.464338] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:41.698 TLSTESTn1 00:19:41.698 13:02:46 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:41.698 Running I/O for 10 seconds... 00:19:51.691 00:19:51.691 Latency(us) 00:19:51.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.691 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:51.691 Verification LBA range: start 0x0 length 0x2000 00:19:51.691 TLSTESTn1 : 10.02 5819.68 22.73 0.00 0.00 21956.33 4642.13 40195.41 00:19:51.691 =================================================================================================================== 00:19:51.691 Total : 5819.68 22.73 0.00 0.00 21956.33 4642.13 40195.41 00:19:51.691 0 00:19:51.691 13:02:56 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.691 13:02:56 -- target/tls.sh@45 -- # killprocess 4003414 00:19:51.691 13:02:56 -- common/autotest_common.sh@936 -- # '[' -z 4003414 ']' 00:19:51.691 13:02:56 -- common/autotest_common.sh@940 -- # kill -0 4003414 00:19:51.691 13:02:56 -- common/autotest_common.sh@941 -- # uname 00:19:51.691 13:02:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:51.691 13:02:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4003414 00:19:51.952 13:02:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:51.952 13:02:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:51.952 13:02:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4003414' 00:19:51.952 killing process with pid 4003414 00:19:51.952 13:02:56 -- common/autotest_common.sh@955 -- # kill 4003414 00:19:51.952 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.952 00:19:51.952 Latency(us) 00:19:51.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.952 =================================================================================================================== 00:19:51.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.952 [2024-04-26 13:02:56.763768] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:51.952 13:02:56 -- common/autotest_common.sh@960 -- # wait 4003414 00:19:51.952 13:02:56 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WvMYWm7Vfp 00:19:51.952 13:02:56 -- common/autotest_common.sh@638 -- # local es=0 00:19:51.952 13:02:56 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WvMYWm7Vfp 00:19:51.952 13:02:56 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:51.952 13:02:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:51.952 13:02:56 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:51.952 13:02:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:51.952 13:02:56 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WvMYWm7Vfp 00:19:51.952 13:02:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.952 13:02:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.952 13:02:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.952 13:02:56 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WvMYWm7Vfp' 00:19:51.952 13:02:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.952 13:02:56 -- target/tls.sh@28 -- # bdevperf_pid=4005653 00:19:51.952 13:02:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.952 13:02:56 -- target/tls.sh@31 -- # waitforlisten 4005653 /var/tmp/bdevperf.sock 00:19:51.952 13:02:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.952 13:02:56 -- common/autotest_common.sh@817 -- # '[' -z 4005653 ']' 00:19:51.952 13:02:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.952 13:02:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.952 13:02:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.952 13:02:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.952 13:02:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.952 [2024-04-26 13:02:56.921762] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
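That TLSTESTn1 pass (about 5.8k 4 KiB IOPS at queue depth 128 over the TLS connection) and each expected-failure case that follows reuse the same bdevperf flow: start bdevperf idle on a private RPC socket, attach a TLS-enabled controller, then drive I/O through it. Condensed from the run above (the PSK, host NQN, and subsystem NQN are what the cases vary):

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OrdMxzELuR
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests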
00:19:51.952 [2024-04-26 13:02:56.921820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4005653 ] 00:19:51.952 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.952 [2024-04-26 13:02:56.971259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.213 [2024-04-26 13:02:57.021580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.785 13:02:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:52.785 13:02:57 -- common/autotest_common.sh@850 -- # return 0 00:19:52.785 13:02:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WvMYWm7Vfp 00:19:52.785 [2024-04-26 13:02:57.830652] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.785 [2024-04-26 13:02:57.830707] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:52.785 [2024-04-26 13:02:57.838769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.785 [2024-04-26 13:02:57.839475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b39b0 (107): Transport endpoint is not connected 00:19:52.785 [2024-04-26 13:02:57.840471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b39b0 (9): Bad file descriptor 00:19:52.785 [2024-04-26 13:02:57.841473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.785 [2024-04-26 13:02:57.841480] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:52.785 [2024-04-26 13:02:57.841486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:52.785 request: 00:19:52.785 { 00:19:52.785 "name": "TLSTEST", 00:19:52.785 "trtype": "tcp", 00:19:52.785 "traddr": "10.0.0.2", 00:19:52.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.785 "adrfam": "ipv4", 00:19:52.785 "trsvcid": "4420", 00:19:52.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.785 "psk": "/tmp/tmp.WvMYWm7Vfp", 00:19:52.785 "method": "bdev_nvme_attach_controller", 00:19:52.785 "req_id": 1 00:19:52.785 } 00:19:52.785 Got JSON-RPC error response 00:19:52.785 response: 00:19:52.785 { 00:19:52.785 "code": -32602, 00:19:52.785 "message": "Invalid parameters" 00:19:52.785 } 00:19:53.047 13:02:57 -- target/tls.sh@36 -- # killprocess 4005653 00:19:53.047 13:02:57 -- common/autotest_common.sh@936 -- # '[' -z 4005653 ']' 00:19:53.047 13:02:57 -- common/autotest_common.sh@940 -- # kill -0 4005653 00:19:53.047 13:02:57 -- common/autotest_common.sh@941 -- # uname 00:19:53.047 13:02:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.047 13:02:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4005653 00:19:53.047 13:02:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:53.047 13:02:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:53.047 13:02:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4005653' 00:19:53.047 killing process with pid 4005653 00:19:53.047 13:02:57 -- common/autotest_common.sh@955 -- # kill 4005653 00:19:53.047 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.047 00:19:53.048 Latency(us) 00:19:53.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.048 =================================================================================================================== 00:19:53.048 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.048 [2024-04-26 13:02:57.926103] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:53.048 13:02:57 -- common/autotest_common.sh@960 -- # wait 4005653 00:19:53.048 13:02:58 -- target/tls.sh@37 -- # return 1 00:19:53.048 13:02:58 -- common/autotest_common.sh@641 -- # es=1 00:19:53.048 13:02:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:53.048 13:02:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:53.048 13:02:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:53.048 13:02:58 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OrdMxzELuR 00:19:53.048 13:02:58 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.048 13:02:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OrdMxzELuR 00:19:53.048 13:02:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:53.048 13:02:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.048 13:02:58 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:53.048 13:02:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.048 13:02:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OrdMxzELuR 00:19:53.048 13:02:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.048 13:02:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.048 13:02:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:19:53.048 13:02:58 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OrdMxzELuR' 00:19:53.048 13:02:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.048 13:02:58 -- target/tls.sh@28 -- # bdevperf_pid=4005780 00:19:53.048 13:02:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.048 13:02:58 -- target/tls.sh@31 -- # waitforlisten 4005780 /var/tmp/bdevperf.sock 00:19:53.048 13:02:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.048 13:02:58 -- common/autotest_common.sh@817 -- # '[' -z 4005780 ']' 00:19:53.048 13:02:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.048 13:02:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.048 13:02:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.048 13:02:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.048 13:02:58 -- common/autotest_common.sh@10 -- # set +x 00:19:53.048 [2024-04-26 13:02:58.076723] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:53.048 [2024-04-26 13:02:58.076774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4005780 ] 00:19:53.048 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.309 [2024-04-26 13:02:58.128310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.309 [2024-04-26 13:02:58.177536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.881 13:02:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.881 13:02:58 -- common/autotest_common.sh@850 -- # return 0 00:19:53.881 13:02:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.OrdMxzELuR 00:19:54.142 [2024-04-26 13:02:58.982442] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.142 [2024-04-26 13:02:58.982509] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:54.142 [2024-04-26 13:02:58.991496] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:54.142 [2024-04-26 13:02:58.991515] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:54.142 [2024-04-26 13:02:58.991534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:54.142 [2024-04-26 13:02:58.992491] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd409b0 (107): Transport endpoint is not connected 00:19:54.142 [2024-04-26 13:02:58.993486] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd409b0 (9): Bad file descriptor 00:19:54.142 [2024-04-26 13:02:58.994488] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:54.142 [2024-04-26 13:02:58.994495] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:54.142 [2024-04-26 13:02:58.994501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:54.142 request: 00:19:54.142 { 00:19:54.142 "name": "TLSTEST", 00:19:54.142 "trtype": "tcp", 00:19:54.142 "traddr": "10.0.0.2", 00:19:54.142 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:54.142 "adrfam": "ipv4", 00:19:54.142 "trsvcid": "4420", 00:19:54.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.142 "psk": "/tmp/tmp.OrdMxzELuR", 00:19:54.142 "method": "bdev_nvme_attach_controller", 00:19:54.142 "req_id": 1 00:19:54.142 } 00:19:54.142 Got JSON-RPC error response 00:19:54.142 response: 00:19:54.142 { 00:19:54.142 "code": -32602, 00:19:54.142 "message": "Invalid parameters" 00:19:54.142 } 00:19:54.142 13:02:59 -- target/tls.sh@36 -- # killprocess 4005780 00:19:54.142 13:02:59 -- common/autotest_common.sh@936 -- # '[' -z 4005780 ']' 00:19:54.142 13:02:59 -- common/autotest_common.sh@940 -- # kill -0 4005780 00:19:54.142 13:02:59 -- common/autotest_common.sh@941 -- # uname 00:19:54.142 13:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.142 13:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4005780 00:19:54.142 13:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:54.142 13:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:54.142 13:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4005780' 00:19:54.142 killing process with pid 4005780 00:19:54.142 13:02:59 -- common/autotest_common.sh@955 -- # kill 4005780 00:19:54.142 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.142 00:19:54.142 Latency(us) 00:19:54.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.142 =================================================================================================================== 00:19:54.142 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.142 [2024-04-26 13:02:59.079488] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:54.142 13:02:59 -- common/autotest_common.sh@960 -- # wait 4005780 00:19:54.142 13:02:59 -- target/tls.sh@37 -- # return 1 00:19:54.142 13:02:59 -- common/autotest_common.sh@641 -- # es=1 00:19:54.142 13:02:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:54.142 13:02:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:54.142 13:02:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:54.142 13:02:59 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OrdMxzELuR 00:19:54.142 13:02:59 -- common/autotest_common.sh@638 -- # local es=0 00:19:54.142 13:02:59 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OrdMxzELuR 00:19:54.143 13:02:59 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:54.143 13:02:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:54.143 13:02:59 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:54.143 13:02:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:54.143 13:02:59 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OrdMxzELuR 00:19:54.143 13:02:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.143 13:02:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:54.143 13:02:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.143 13:02:59 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OrdMxzELuR' 00:19:54.143 13:02:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.143 13:02:59 -- target/tls.sh@28 -- # bdevperf_pid=4006117 00:19:54.143 13:02:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.143 13:02:59 -- target/tls.sh@31 -- # waitforlisten 4006117 /var/tmp/bdevperf.sock 00:19:54.143 13:02:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.143 13:02:59 -- common/autotest_common.sh@817 -- # '[' -z 4006117 ']' 00:19:54.143 13:02:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.143 13:02:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.143 13:02:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.143 13:02:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.143 13:02:59 -- common/autotest_common.sh@10 -- # set +x 00:19:54.403 [2024-04-26 13:02:59.231255] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
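The wrong-key and unauthorized-host attempts above, and the missing-subsystem and no-PSK attempts that follow, are all wrapped in the harness's NOT helper: bdev_nvme_attach_controller is expected to fail, run_bdevperf then hits the return 1 at target/tls.sh@37, and the wrapper inverts that result so the overall step counts as a pass. A minimal sketch of that inversion (simplified; the real helper in autotest_common.sh also tracks the exit status in es, as the trace shows):

  NOT() {
      if "$@"; then
          return 1    # wrapped command unexpectedly succeeded
      fi
      return 0        # wrapped command failed, which is what the test wants
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OrdMxzELuR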
00:19:54.403 [2024-04-26 13:02:59.231310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006117 ] 00:19:54.403 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.403 [2024-04-26 13:02:59.284409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.403 [2024-04-26 13:02:59.334398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.403 13:02:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.403 13:02:59 -- common/autotest_common.sh@850 -- # return 0 00:19:54.403 13:02:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OrdMxzELuR 00:19:54.664 [2024-04-26 13:02:59.542098] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.664 [2024-04-26 13:02:59.542163] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:54.664 [2024-04-26 13:02:59.552860] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:54.664 [2024-04-26 13:02:59.552878] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:54.664 [2024-04-26 13:02:59.552897] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:54.664 [2024-04-26 13:02:59.553126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc889b0 (107): Transport endpoint is not connected 00:19:54.664 [2024-04-26 13:02:59.554121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc889b0 (9): Bad file descriptor 00:19:54.664 [2024-04-26 13:02:59.555123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:54.664 [2024-04-26 13:02:59.555129] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:54.664 [2024-04-26 13:02:59.555137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:54.664 request: 00:19:54.664 { 00:19:54.664 "name": "TLSTEST", 00:19:54.664 "trtype": "tcp", 00:19:54.664 "traddr": "10.0.0.2", 00:19:54.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.664 "adrfam": "ipv4", 00:19:54.664 "trsvcid": "4420", 00:19:54.664 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.664 "psk": "/tmp/tmp.OrdMxzELuR", 00:19:54.664 "method": "bdev_nvme_attach_controller", 00:19:54.664 "req_id": 1 00:19:54.664 } 00:19:54.664 Got JSON-RPC error response 00:19:54.664 response: 00:19:54.664 { 00:19:54.664 "code": -32602, 00:19:54.664 "message": "Invalid parameters" 00:19:54.664 } 00:19:54.664 13:02:59 -- target/tls.sh@36 -- # killprocess 4006117 00:19:54.664 13:02:59 -- common/autotest_common.sh@936 -- # '[' -z 4006117 ']' 00:19:54.664 13:02:59 -- common/autotest_common.sh@940 -- # kill -0 4006117 00:19:54.664 13:02:59 -- common/autotest_common.sh@941 -- # uname 00:19:54.664 13:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.664 13:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4006117 00:19:54.664 13:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:54.664 13:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:54.664 13:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4006117' 00:19:54.664 killing process with pid 4006117 00:19:54.664 13:02:59 -- common/autotest_common.sh@955 -- # kill 4006117 00:19:54.664 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.664 00:19:54.664 Latency(us) 00:19:54.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.664 =================================================================================================================== 00:19:54.664 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.664 [2024-04-26 13:02:59.648970] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:54.664 13:02:59 -- common/autotest_common.sh@960 -- # wait 4006117 00:19:54.925 13:02:59 -- target/tls.sh@37 -- # return 1 00:19:54.925 13:02:59 -- common/autotest_common.sh@641 -- # es=1 00:19:54.925 13:02:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:54.925 13:02:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:54.925 13:02:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:54.925 13:02:59 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.925 13:02:59 -- common/autotest_common.sh@638 -- # local es=0 00:19:54.925 13:02:59 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.925 13:02:59 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:54.925 13:02:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:54.925 13:02:59 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:54.925 13:02:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:54.925 13:02:59 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.925 13:02:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.925 13:02:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.925 13:02:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.925 13:02:59 -- target/tls.sh@23 -- # psk= 
00:19:54.925 13:02:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.925 13:02:59 -- target/tls.sh@28 -- # bdevperf_pid=4006129 00:19:54.925 13:02:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.925 13:02:59 -- target/tls.sh@31 -- # waitforlisten 4006129 /var/tmp/bdevperf.sock 00:19:54.925 13:02:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.925 13:02:59 -- common/autotest_common.sh@817 -- # '[' -z 4006129 ']' 00:19:54.925 13:02:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.925 13:02:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.925 13:02:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.925 13:02:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.925 13:02:59 -- common/autotest_common.sh@10 -- # set +x 00:19:54.925 [2024-04-26 13:02:59.802509] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:54.925 [2024-04-26 13:02:59.802562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006129 ] 00:19:54.925 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.925 [2024-04-26 13:02:59.853224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.925 [2024-04-26 13:02:59.903059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.866 13:03:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.866 13:03:00 -- common/autotest_common.sh@850 -- # return 0 00:19:55.866 13:03:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:55.866 [2024-04-26 13:03:00.726815] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:55.866 [2024-04-26 13:03:00.728642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa17240 (9): Bad file descriptor 00:19:55.866 [2024-04-26 13:03:00.729642] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.866 [2024-04-26 13:03:00.729649] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:55.866 [2024-04-26 13:03:00.729656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:55.866 request: 00:19:55.867 { 00:19:55.867 "name": "TLSTEST", 00:19:55.867 "trtype": "tcp", 00:19:55.867 "traddr": "10.0.0.2", 00:19:55.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.867 "adrfam": "ipv4", 00:19:55.867 "trsvcid": "4420", 00:19:55.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.867 "method": "bdev_nvme_attach_controller", 00:19:55.867 "req_id": 1 00:19:55.867 } 00:19:55.867 Got JSON-RPC error response 00:19:55.867 response: 00:19:55.867 { 00:19:55.867 "code": -32602, 00:19:55.867 "message": "Invalid parameters" 00:19:55.867 } 00:19:55.867 13:03:00 -- target/tls.sh@36 -- # killprocess 4006129 00:19:55.867 13:03:00 -- common/autotest_common.sh@936 -- # '[' -z 4006129 ']' 00:19:55.867 13:03:00 -- common/autotest_common.sh@940 -- # kill -0 4006129 00:19:55.867 13:03:00 -- common/autotest_common.sh@941 -- # uname 00:19:55.867 13:03:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.867 13:03:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4006129 00:19:55.867 13:03:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:55.867 13:03:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:55.867 13:03:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4006129' 00:19:55.867 killing process with pid 4006129 00:19:55.867 13:03:00 -- common/autotest_common.sh@955 -- # kill 4006129 00:19:55.867 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.867 00:19:55.867 Latency(us) 00:19:55.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.867 =================================================================================================================== 00:19:55.867 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.867 13:03:00 -- common/autotest_common.sh@960 -- # wait 4006129 00:19:55.867 13:03:00 -- target/tls.sh@37 -- # return 1 00:19:55.867 13:03:00 -- common/autotest_common.sh@641 -- # es=1 00:19:55.867 13:03:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:55.867 13:03:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:55.867 13:03:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:55.867 13:03:00 -- target/tls.sh@158 -- # killprocess 4000676 00:19:55.867 13:03:00 -- common/autotest_common.sh@936 -- # '[' -z 4000676 ']' 00:19:55.867 13:03:00 -- common/autotest_common.sh@940 -- # kill -0 4000676 00:19:55.867 13:03:00 -- common/autotest_common.sh@941 -- # uname 00:19:55.867 13:03:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.128 13:03:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4000676 00:19:56.128 13:03:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:56.128 13:03:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:56.128 13:03:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4000676' 00:19:56.128 killing process with pid 4000676 00:19:56.128 13:03:00 -- common/autotest_common.sh@955 -- # kill 4000676 00:19:56.128 [2024-04-26 13:03:00.976899] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:56.128 13:03:00 -- common/autotest_common.sh@960 -- # wait 4000676 00:19:56.128 13:03:01 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:56.128 13:03:01 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:19:56.128 13:03:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:56.128 13:03:01 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:56.128 13:03:01 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:56.128 13:03:01 -- nvmf/common.sh@693 -- # digest=2 00:19:56.128 13:03:01 -- nvmf/common.sh@694 -- # python - 00:19:56.128 13:03:01 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:56.128 13:03:01 -- target/tls.sh@160 -- # mktemp 00:19:56.128 13:03:01 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.SKN1k2KAyO 00:19:56.128 13:03:01 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:56.128 13:03:01 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.SKN1k2KAyO 00:19:56.128 13:03:01 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:56.128 13:03:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:56.128 13:03:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:56.128 13:03:01 -- common/autotest_common.sh@10 -- # set +x 00:19:56.128 13:03:01 -- nvmf/common.sh@470 -- # nvmfpid=4006562 00:19:56.128 13:03:01 -- nvmf/common.sh@471 -- # waitforlisten 4006562 00:19:56.128 13:03:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:56.128 13:03:01 -- common/autotest_common.sh@817 -- # '[' -z 4006562 ']' 00:19:56.128 13:03:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.128 13:03:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:56.128 13:03:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.128 13:03:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:56.128 13:03:01 -- common/autotest_common.sh@10 -- # set +x 00:19:56.389 [2024-04-26 13:03:01.205284] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:56.389 [2024-04-26 13:03:01.205352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.389 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.389 [2024-04-26 13:03:01.290961] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.389 [2024-04-26 13:03:01.348224] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.389 [2024-04-26 13:03:01.348259] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.389 [2024-04-26 13:03:01.348265] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.389 [2024-04-26 13:03:01.348273] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.389 [2024-04-26 13:03:01.348277] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
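The interchange key assembled above (key_long=NVMeTLSkey-1:02:MDAx...wWXNJw==:) comes out of format_interchange_psk, which pipes the configured key through the inline python step visible in the trace. The following is only a sketch of that transformation, based on the NVMe/TCP PSK interchange layout (base64 of the configured PSK with a CRC-32 appended, wrapped as NVMeTLSkey-1:<hash>:...:); the real helper lives in nvmf/common.sh and may differ in detail.

# Hedged sketch of the key-formatting step traced above; not a copy of nvmf/common.sh.
# It wraps a configured PSK in the interchange format "NVMeTLSkey-1:<hh>:base64(PSK || CRC-32):".
format_interchange_psk_sketch() {
    local key=$1 digest=$2        # e.g. 001122...556677 and 2 (retained-hash identifier)
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # checksum appended before encoding (assumed little-endian)
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
}

# Mirroring the trace: keep the key in a 0600 temp file so later RPCs will accept it.
key_long=$(format_interchange_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 2)
key_path=$(mktemp)
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"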
00:19:56.389 [2024-04-26 13:03:01.348292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.961 13:03:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:56.961 13:03:02 -- common/autotest_common.sh@850 -- # return 0 00:19:56.961 13:03:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:56.961 13:03:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:56.961 13:03:02 -- common/autotest_common.sh@10 -- # set +x 00:19:57.280 13:03:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.280 13:03:02 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.SKN1k2KAyO 00:19:57.280 13:03:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.SKN1k2KAyO 00:19:57.280 13:03:02 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:57.280 [2024-04-26 13:03:02.199361] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.280 13:03:02 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.541 13:03:02 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:57.541 [2024-04-26 13:03:02.504096] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.541 [2024-04-26 13:03:02.504291] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.541 13:03:02 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.802 malloc0 00:19:57.802 13:03:02 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.802 13:03:02 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO 00:19:58.062 [2024-04-26 13:03:02.963054] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:58.062 13:03:02 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SKN1k2KAyO 00:19:58.062 13:03:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.062 13:03:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.062 13:03:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.062 13:03:02 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SKN1k2KAyO' 00:19:58.062 13:03:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.062 13:03:02 -- target/tls.sh@28 -- # bdevperf_pid=4006954 00:19:58.062 13:03:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.062 13:03:02 -- target/tls.sh@31 -- # waitforlisten 4006954 /var/tmp/bdevperf.sock 00:19:58.062 13:03:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.062 13:03:02 -- common/autotest_common.sh@817 -- # '[' -z 4006954 ']' 00:19:58.062 13:03:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.062 13:03:02 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.062 13:03:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.062 13:03:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.062 13:03:02 -- common/autotest_common.sh@10 -- # set +x 00:19:58.062 [2024-04-26 13:03:03.025304] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:58.062 [2024-04-26 13:03:03.025359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006954 ] 00:19:58.062 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.062 [2024-04-26 13:03:03.076032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.323 [2024-04-26 13:03:03.126673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.895 13:03:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:58.895 13:03:03 -- common/autotest_common.sh@850 -- # return 0 00:19:58.895 13:03:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO 00:19:58.895 [2024-04-26 13:03:03.947666] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.895 [2024-04-26 13:03:03.947727] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:59.155 TLSTESTn1 00:19:59.155 13:03:04 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:59.155 Running I/O for 10 seconds... 
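Reassembled from the wrapped lines above, the passing TLS path is: create the TCP transport, the subsystem and a TLS-enabled listener (-k), expose a malloc namespace, register the host with the 0600 PSK file, then point a bdevperf instance at the target with the same PSK and drive it over its own RPC socket. The sketch below only restates those RPCs in order; the NQNs, addresses and /tmp key path are taken from the log, while the surrounding shell is illustrative rather than a copy of target/tls.sh.

# Condensed replay of the successful TLS case above (illustrative wrapper, not target/tls.sh itself).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.SKN1k2KAyO                                  # PSK interchange file, mode 0600

# Target side: transport, subsystem, TLS listener, namespace, host with PSK.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key

# Initiator side: bdevperf was started with -z -r /var/tmp/bdevperf.sock; attach over TLS
# and kick off the queued verify workload through that RPC socket.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk $key
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests

The 10-second verify run and its latency summary follow in the output below.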
00:20:09.149 00:20:09.149 Latency(us) 00:20:09.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.150 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:09.150 Verification LBA range: start 0x0 length 0x2000 00:20:09.150 TLSTESTn1 : 10.02 4613.43 18.02 0.00 0.00 27697.29 6034.77 235929.60 00:20:09.150 =================================================================================================================== 00:20:09.150 Total : 4613.43 18.02 0.00 0.00 27697.29 6034.77 235929.60 00:20:09.150 0 00:20:09.150 13:03:14 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.150 13:03:14 -- target/tls.sh@45 -- # killprocess 4006954 00:20:09.150 13:03:14 -- common/autotest_common.sh@936 -- # '[' -z 4006954 ']' 00:20:09.150 13:03:14 -- common/autotest_common.sh@940 -- # kill -0 4006954 00:20:09.150 13:03:14 -- common/autotest_common.sh@941 -- # uname 00:20:09.150 13:03:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:09.411 13:03:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4006954 00:20:09.411 13:03:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:09.411 13:03:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:09.411 13:03:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4006954' 00:20:09.411 killing process with pid 4006954 00:20:09.411 13:03:14 -- common/autotest_common.sh@955 -- # kill 4006954 00:20:09.411 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.411 00:20:09.411 Latency(us) 00:20:09.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.411 =================================================================================================================== 00:20:09.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.411 [2024-04-26 13:03:14.258285] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:09.411 13:03:14 -- common/autotest_common.sh@960 -- # wait 4006954 00:20:09.411 13:03:14 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.SKN1k2KAyO 00:20:09.411 13:03:14 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SKN1k2KAyO 00:20:09.411 13:03:14 -- common/autotest_common.sh@638 -- # local es=0 00:20:09.411 13:03:14 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SKN1k2KAyO 00:20:09.411 13:03:14 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:09.411 13:03:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:09.411 13:03:14 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:09.411 13:03:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:09.411 13:03:14 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SKN1k2KAyO 00:20:09.411 13:03:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.411 13:03:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.411 13:03:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.411 13:03:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SKN1k2KAyO' 00:20:09.411 13:03:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.411 13:03:14 -- target/tls.sh@28 -- # 
bdevperf_pid=4009717 00:20:09.411 13:03:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.411 13:03:14 -- target/tls.sh@31 -- # waitforlisten 4009717 /var/tmp/bdevperf.sock 00:20:09.411 13:03:14 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.411 13:03:14 -- common/autotest_common.sh@817 -- # '[' -z 4009717 ']' 00:20:09.411 13:03:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.411 13:03:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:09.411 13:03:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.411 13:03:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:09.411 13:03:14 -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 [2024-04-26 13:03:14.422291] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:09.411 [2024-04-26 13:03:14.422347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009717 ] 00:20:09.411 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.672 [2024-04-26 13:03:14.472123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.672 [2024-04-26 13:03:14.522002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.242 13:03:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:10.242 13:03:15 -- common/autotest_common.sh@850 -- # return 0 00:20:10.242 13:03:15 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO 00:20:10.503 [2024-04-26 13:03:15.323051] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.503 [2024-04-26 13:03:15.323085] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:10.503 [2024-04-26 13:03:15.323090] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.SKN1k2KAyO 00:20:10.503 request: 00:20:10.503 { 00:20:10.503 "name": "TLSTEST", 00:20:10.503 "trtype": "tcp", 00:20:10.503 "traddr": "10.0.0.2", 00:20:10.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.503 "adrfam": "ipv4", 00:20:10.503 "trsvcid": "4420", 00:20:10.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.503 "psk": "/tmp/tmp.SKN1k2KAyO", 00:20:10.503 "method": "bdev_nvme_attach_controller", 00:20:10.503 "req_id": 1 00:20:10.503 } 00:20:10.503 Got JSON-RPC error response 00:20:10.503 response: 00:20:10.503 { 00:20:10.503 "code": -1, 00:20:10.503 "message": "Operation not permitted" 00:20:10.503 } 00:20:10.503 13:03:15 -- target/tls.sh@36 -- # killprocess 4009717 00:20:10.503 13:03:15 -- common/autotest_common.sh@936 -- # '[' -z 4009717 ']' 00:20:10.503 13:03:15 -- common/autotest_common.sh@940 -- # kill -0 4009717 00:20:10.503 13:03:15 -- common/autotest_common.sh@941 -- # uname 00:20:10.503 13:03:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.503 
13:03:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4009717 00:20:10.503 13:03:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:10.503 13:03:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:10.503 13:03:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4009717' 00:20:10.503 killing process with pid 4009717 00:20:10.503 13:03:15 -- common/autotest_common.sh@955 -- # kill 4009717 00:20:10.503 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.503 00:20:10.503 Latency(us) 00:20:10.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.503 =================================================================================================================== 00:20:10.503 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:10.503 13:03:15 -- common/autotest_common.sh@960 -- # wait 4009717 00:20:10.503 13:03:15 -- target/tls.sh@37 -- # return 1 00:20:10.503 13:03:15 -- common/autotest_common.sh@641 -- # es=1 00:20:10.503 13:03:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:10.503 13:03:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:10.504 13:03:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:10.504 13:03:15 -- target/tls.sh@174 -- # killprocess 4006562 00:20:10.504 13:03:15 -- common/autotest_common.sh@936 -- # '[' -z 4006562 ']' 00:20:10.504 13:03:15 -- common/autotest_common.sh@940 -- # kill -0 4006562 00:20:10.504 13:03:15 -- common/autotest_common.sh@941 -- # uname 00:20:10.504 13:03:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.504 13:03:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4006562 00:20:10.764 13:03:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:10.764 13:03:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:10.764 13:03:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4006562' 00:20:10.764 killing process with pid 4006562 00:20:10.764 13:03:15 -- common/autotest_common.sh@955 -- # kill 4006562 00:20:10.764 [2024-04-26 13:03:15.569238] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:10.764 13:03:15 -- common/autotest_common.sh@960 -- # wait 4006562 00:20:10.764 13:03:15 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:10.764 13:03:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:10.764 13:03:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:10.764 13:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:10.765 13:03:15 -- nvmf/common.sh@470 -- # nvmfpid=4009866 00:20:10.765 13:03:15 -- nvmf/common.sh@471 -- # waitforlisten 4009866 00:20:10.765 13:03:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:10.765 13:03:15 -- common/autotest_common.sh@817 -- # '[' -z 4009866 ']' 00:20:10.765 13:03:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.765 13:03:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:10.765 13:03:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
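Both failure legs in this part of the run hinge on the key file's mode: after chmod 0666 the initiator-side bdev_nvme_attach_controller above is rejected ("Incorrect permissions for PSK file" → "Operation not permitted"), and the target restart that follows shows nvmf_subsystem_add_host failing for the same reason before the mode is put back to 0600. A hedged sketch of the check being exercised (an assumed wrapper, not the NOT/run_bdevperf helpers themselves):

# SPDK refuses to load a PSK file that is readable by group/other.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.SKN1k2KAyO

chmod 0666 "$key"
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$key"; then
    echo "unexpected: attach succeeded with a world-readable PSK" >&2
    exit 1
fi
chmod 0600 "$key"        # owner-only again, so the later positive runs can load it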
00:20:10.765 13:03:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:10.765 13:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:10.765 [2024-04-26 13:03:15.742920] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:10.765 [2024-04-26 13:03:15.742970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.765 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.765 [2024-04-26 13:03:15.823057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.025 [2024-04-26 13:03:15.879042] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.025 [2024-04-26 13:03:15.879075] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.025 [2024-04-26 13:03:15.879081] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.025 [2024-04-26 13:03:15.879086] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.025 [2024-04-26 13:03:15.879090] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.025 [2024-04-26 13:03:15.879110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.595 13:03:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:11.595 13:03:16 -- common/autotest_common.sh@850 -- # return 0 00:20:11.595 13:03:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:11.595 13:03:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:11.595 13:03:16 -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 13:03:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.595 13:03:16 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.SKN1k2KAyO 00:20:11.595 13:03:16 -- common/autotest_common.sh@638 -- # local es=0 00:20:11.595 13:03:16 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SKN1k2KAyO 00:20:11.595 13:03:16 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:20:11.596 13:03:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:11.596 13:03:16 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:20:11.596 13:03:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:11.596 13:03:16 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.SKN1k2KAyO 00:20:11.596 13:03:16 -- target/tls.sh@49 -- # local key=/tmp/tmp.SKN1k2KAyO 00:20:11.596 13:03:16 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.856 [2024-04-26 13:03:16.686025] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.856 13:03:16 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.856 13:03:16 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.117 [2024-04-26 13:03:16.978722] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.117 [2024-04-26 13:03:16.978889] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.117 13:03:16 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.117 malloc0 00:20:12.117 13:03:17 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.377 13:03:17 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO 00:20:12.377 [2024-04-26 13:03:17.409476] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:12.377 [2024-04-26 13:03:17.409495] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:12.377 [2024-04-26 13:03:17.409511] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:12.377 request: 00:20:12.377 { 00:20:12.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.377 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.377 "psk": "/tmp/tmp.SKN1k2KAyO", 00:20:12.377 "method": "nvmf_subsystem_add_host", 00:20:12.377 "req_id": 1 00:20:12.377 } 00:20:12.377 Got JSON-RPC error response 00:20:12.377 response: 00:20:12.377 { 00:20:12.377 "code": -32603, 00:20:12.377 "message": "Internal error" 00:20:12.377 } 00:20:12.377 13:03:17 -- common/autotest_common.sh@641 -- # es=1 00:20:12.378 13:03:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:12.378 13:03:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:12.378 13:03:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:12.378 13:03:17 -- target/tls.sh@180 -- # killprocess 4009866 00:20:12.378 13:03:17 -- common/autotest_common.sh@936 -- # '[' -z 4009866 ']' 00:20:12.378 13:03:17 -- common/autotest_common.sh@940 -- # kill -0 4009866 00:20:12.378 13:03:17 -- common/autotest_common.sh@941 -- # uname 00:20:12.378 13:03:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.378 13:03:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4009866 00:20:12.640 13:03:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:12.640 13:03:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:12.640 13:03:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4009866' 00:20:12.640 killing process with pid 4009866 00:20:12.640 13:03:17 -- common/autotest_common.sh@955 -- # kill 4009866 00:20:12.640 13:03:17 -- common/autotest_common.sh@960 -- # wait 4009866 00:20:12.640 13:03:17 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.SKN1k2KAyO 00:20:12.640 13:03:17 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:12.640 13:03:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:12.640 13:03:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:12.640 13:03:17 -- common/autotest_common.sh@10 -- # set +x 00:20:12.640 13:03:17 -- nvmf/common.sh@470 -- # nvmfpid=4010321 00:20:12.640 13:03:17 -- nvmf/common.sh@471 -- # waitforlisten 4010321 00:20:12.640 13:03:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:12.640 13:03:17 -- common/autotest_common.sh@817 -- # '[' -z 4010321 ']' 00:20:12.640 13:03:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.640 13:03:17 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.640 13:03:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.640 13:03:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.640 13:03:17 -- common/autotest_common.sh@10 -- # set +x 00:20:12.640 [2024-04-26 13:03:17.672564] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:12.640 [2024-04-26 13:03:17.672619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.900 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.900 [2024-04-26 13:03:17.755577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.900 [2024-04-26 13:03:17.812184] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.900 [2024-04-26 13:03:17.812224] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.900 [2024-04-26 13:03:17.812229] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.900 [2024-04-26 13:03:17.812234] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.900 [2024-04-26 13:03:17.812238] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.900 [2024-04-26 13:03:17.812254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.471 13:03:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:13.471 13:03:18 -- common/autotest_common.sh@850 -- # return 0 00:20:13.471 13:03:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:13.471 13:03:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:13.471 13:03:18 -- common/autotest_common.sh@10 -- # set +x 00:20:13.471 13:03:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.471 13:03:18 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.SKN1k2KAyO 00:20:13.471 13:03:18 -- target/tls.sh@49 -- # local key=/tmp/tmp.SKN1k2KAyO 00:20:13.471 13:03:18 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:13.731 [2024-04-26 13:03:18.595301] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.731 13:03:18 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:13.731 13:03:18 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:13.992 [2024-04-26 13:03:18.904056] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.992 [2024-04-26 13:03:18.904233] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.992 13:03:18 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.253 malloc0 00:20:14.253 13:03:19 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:14.253 13:03:19 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO 00:20:14.514 [2024-04-26 13:03:19.351217] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:14.514 13:03:19 -- target/tls.sh@188 -- # bdevperf_pid=4010700 00:20:14.514 13:03:19 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.514 13:03:19 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.514 13:03:19 -- target/tls.sh@191 -- # waitforlisten 4010700 /var/tmp/bdevperf.sock 00:20:14.514 13:03:19 -- common/autotest_common.sh@817 -- # '[' -z 4010700 ']' 00:20:14.514 13:03:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.514 13:03:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:14.514 13:03:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.514 13:03:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:14.514 13:03:19 -- common/autotest_common.sh@10 -- # set +x 00:20:14.514 [2024-04-26 13:03:19.417346] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:14.514 [2024-04-26 13:03:19.417417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010700 ] 00:20:14.514 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.514 [2024-04-26 13:03:19.468034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.514 [2024-04-26 13:03:19.518629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.454 13:03:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:15.454 13:03:20 -- common/autotest_common.sh@850 -- # return 0 00:20:15.454 13:03:20 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO 00:20:15.454 [2024-04-26 13:03:20.311593] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.454 [2024-04-26 13:03:20.311653] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:15.454 TLSTESTn1 00:20:15.454 13:03:20 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:15.714 13:03:20 -- target/tls.sh@196 -- # tgtconf='{ 00:20:15.714 "subsystems": [ 00:20:15.714 { 00:20:15.714 "subsystem": "keyring", 00:20:15.714 "config": [] 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "subsystem": "iobuf", 00:20:15.714 "config": [ 00:20:15.714 { 00:20:15.714 "method": "iobuf_set_options", 00:20:15.714 "params": { 00:20:15.714 
"small_pool_count": 8192, 00:20:15.714 "large_pool_count": 1024, 00:20:15.714 "small_bufsize": 8192, 00:20:15.714 "large_bufsize": 135168 00:20:15.714 } 00:20:15.714 } 00:20:15.714 ] 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "subsystem": "sock", 00:20:15.714 "config": [ 00:20:15.714 { 00:20:15.714 "method": "sock_impl_set_options", 00:20:15.714 "params": { 00:20:15.714 "impl_name": "posix", 00:20:15.714 "recv_buf_size": 2097152, 00:20:15.714 "send_buf_size": 2097152, 00:20:15.714 "enable_recv_pipe": true, 00:20:15.714 "enable_quickack": false, 00:20:15.714 "enable_placement_id": 0, 00:20:15.714 "enable_zerocopy_send_server": true, 00:20:15.714 "enable_zerocopy_send_client": false, 00:20:15.714 "zerocopy_threshold": 0, 00:20:15.714 "tls_version": 0, 00:20:15.714 "enable_ktls": false 00:20:15.714 } 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "method": "sock_impl_set_options", 00:20:15.714 "params": { 00:20:15.714 "impl_name": "ssl", 00:20:15.714 "recv_buf_size": 4096, 00:20:15.714 "send_buf_size": 4096, 00:20:15.714 "enable_recv_pipe": true, 00:20:15.714 "enable_quickack": false, 00:20:15.714 "enable_placement_id": 0, 00:20:15.714 "enable_zerocopy_send_server": true, 00:20:15.714 "enable_zerocopy_send_client": false, 00:20:15.714 "zerocopy_threshold": 0, 00:20:15.714 "tls_version": 0, 00:20:15.714 "enable_ktls": false 00:20:15.714 } 00:20:15.714 } 00:20:15.714 ] 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "subsystem": "vmd", 00:20:15.714 "config": [] 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "subsystem": "accel", 00:20:15.714 "config": [ 00:20:15.714 { 00:20:15.714 "method": "accel_set_options", 00:20:15.714 "params": { 00:20:15.714 "small_cache_size": 128, 00:20:15.714 "large_cache_size": 16, 00:20:15.714 "task_count": 2048, 00:20:15.714 "sequence_count": 2048, 00:20:15.714 "buf_count": 2048 00:20:15.714 } 00:20:15.714 } 00:20:15.714 ] 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "subsystem": "bdev", 00:20:15.714 "config": [ 00:20:15.714 { 00:20:15.714 "method": "bdev_set_options", 00:20:15.714 "params": { 00:20:15.714 "bdev_io_pool_size": 65535, 00:20:15.714 "bdev_io_cache_size": 256, 00:20:15.714 "bdev_auto_examine": true, 00:20:15.714 "iobuf_small_cache_size": 128, 00:20:15.714 "iobuf_large_cache_size": 16 00:20:15.714 } 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "method": "bdev_raid_set_options", 00:20:15.714 "params": { 00:20:15.714 "process_window_size_kb": 1024 00:20:15.714 } 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "method": "bdev_iscsi_set_options", 00:20:15.714 "params": { 00:20:15.714 "timeout_sec": 30 00:20:15.714 } 00:20:15.714 }, 00:20:15.714 { 00:20:15.714 "method": "bdev_nvme_set_options", 00:20:15.714 "params": { 00:20:15.714 "action_on_timeout": "none", 00:20:15.714 "timeout_us": 0, 00:20:15.714 "timeout_admin_us": 0, 00:20:15.714 "keep_alive_timeout_ms": 10000, 00:20:15.714 "arbitration_burst": 0, 00:20:15.714 "low_priority_weight": 0, 00:20:15.714 "medium_priority_weight": 0, 00:20:15.714 "high_priority_weight": 0, 00:20:15.714 "nvme_adminq_poll_period_us": 10000, 00:20:15.714 "nvme_ioq_poll_period_us": 0, 00:20:15.714 "io_queue_requests": 0, 00:20:15.714 "delay_cmd_submit": true, 00:20:15.714 "transport_retry_count": 4, 00:20:15.714 "bdev_retry_count": 3, 00:20:15.714 "transport_ack_timeout": 0, 00:20:15.714 "ctrlr_loss_timeout_sec": 0, 00:20:15.714 "reconnect_delay_sec": 0, 00:20:15.714 "fast_io_fail_timeout_sec": 0, 00:20:15.714 "disable_auto_failback": false, 00:20:15.714 "generate_uuids": false, 00:20:15.714 "transport_tos": 0, 00:20:15.714 "nvme_error_stat": 
false, 00:20:15.714 "rdma_srq_size": 0, 00:20:15.714 "io_path_stat": false, 00:20:15.714 "allow_accel_sequence": false, 00:20:15.714 "rdma_max_cq_size": 0, 00:20:15.714 "rdma_cm_event_timeout_ms": 0, 00:20:15.714 "dhchap_digests": [ 00:20:15.714 "sha256", 00:20:15.714 "sha384", 00:20:15.714 "sha512" 00:20:15.714 ], 00:20:15.714 "dhchap_dhgroups": [ 00:20:15.714 "null", 00:20:15.714 "ffdhe2048", 00:20:15.714 "ffdhe3072", 00:20:15.714 "ffdhe4096", 00:20:15.714 "ffdhe6144", 00:20:15.714 "ffdhe8192" 00:20:15.714 ] 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "bdev_nvme_set_hotplug", 00:20:15.715 "params": { 00:20:15.715 "period_us": 100000, 00:20:15.715 "enable": false 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "bdev_malloc_create", 00:20:15.715 "params": { 00:20:15.715 "name": "malloc0", 00:20:15.715 "num_blocks": 8192, 00:20:15.715 "block_size": 4096, 00:20:15.715 "physical_block_size": 4096, 00:20:15.715 "uuid": "a41ca54e-e8ec-435f-9dbd-3cb2aa5c6c50", 00:20:15.715 "optimal_io_boundary": 0 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "bdev_wait_for_examine" 00:20:15.715 } 00:20:15.715 ] 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "subsystem": "nbd", 00:20:15.715 "config": [] 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "subsystem": "scheduler", 00:20:15.715 "config": [ 00:20:15.715 { 00:20:15.715 "method": "framework_set_scheduler", 00:20:15.715 "params": { 00:20:15.715 "name": "static" 00:20:15.715 } 00:20:15.715 } 00:20:15.715 ] 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "subsystem": "nvmf", 00:20:15.715 "config": [ 00:20:15.715 { 00:20:15.715 "method": "nvmf_set_config", 00:20:15.715 "params": { 00:20:15.715 "discovery_filter": "match_any", 00:20:15.715 "admin_cmd_passthru": { 00:20:15.715 "identify_ctrlr": false 00:20:15.715 } 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "nvmf_set_max_subsystems", 00:20:15.715 "params": { 00:20:15.715 "max_subsystems": 1024 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "nvmf_set_crdt", 00:20:15.715 "params": { 00:20:15.715 "crdt1": 0, 00:20:15.715 "crdt2": 0, 00:20:15.715 "crdt3": 0 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "nvmf_create_transport", 00:20:15.715 "params": { 00:20:15.715 "trtype": "TCP", 00:20:15.715 "max_queue_depth": 128, 00:20:15.715 "max_io_qpairs_per_ctrlr": 127, 00:20:15.715 "in_capsule_data_size": 4096, 00:20:15.715 "max_io_size": 131072, 00:20:15.715 "io_unit_size": 131072, 00:20:15.715 "max_aq_depth": 128, 00:20:15.715 "num_shared_buffers": 511, 00:20:15.715 "buf_cache_size": 4294967295, 00:20:15.715 "dif_insert_or_strip": false, 00:20:15.715 "zcopy": false, 00:20:15.715 "c2h_success": false, 00:20:15.715 "sock_priority": 0, 00:20:15.715 "abort_timeout_sec": 1, 00:20:15.715 "ack_timeout": 0, 00:20:15.715 "data_wr_pool_size": 0 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "nvmf_create_subsystem", 00:20:15.715 "params": { 00:20:15.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.715 "allow_any_host": false, 00:20:15.715 "serial_number": "SPDK00000000000001", 00:20:15.715 "model_number": "SPDK bdev Controller", 00:20:15.715 "max_namespaces": 10, 00:20:15.715 "min_cntlid": 1, 00:20:15.715 "max_cntlid": 65519, 00:20:15.715 "ana_reporting": false 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "nvmf_subsystem_add_host", 00:20:15.715 "params": { 00:20:15.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.715 "host": "nqn.2016-06.io.spdk:host1", 
00:20:15.715 "psk": "/tmp/tmp.SKN1k2KAyO" 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "nvmf_subsystem_add_ns", 00:20:15.715 "params": { 00:20:15.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.715 "namespace": { 00:20:15.715 "nsid": 1, 00:20:15.715 "bdev_name": "malloc0", 00:20:15.715 "nguid": "A41CA54EE8EC435F9DBD3CB2AA5C6C50", 00:20:15.715 "uuid": "a41ca54e-e8ec-435f-9dbd-3cb2aa5c6c50", 00:20:15.715 "no_auto_visible": false 00:20:15.715 } 00:20:15.715 } 00:20:15.715 }, 00:20:15.715 { 00:20:15.715 "method": "nvmf_subsystem_add_listener", 00:20:15.715 "params": { 00:20:15.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.715 "listen_address": { 00:20:15.715 "trtype": "TCP", 00:20:15.715 "adrfam": "IPv4", 00:20:15.715 "traddr": "10.0.0.2", 00:20:15.715 "trsvcid": "4420" 00:20:15.715 }, 00:20:15.715 "secure_channel": true 00:20:15.715 } 00:20:15.715 } 00:20:15.715 ] 00:20:15.715 } 00:20:15.715 ] 00:20:15.715 }' 00:20:15.715 13:03:20 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:15.976 13:03:20 -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:15.976 "subsystems": [ 00:20:15.976 { 00:20:15.976 "subsystem": "keyring", 00:20:15.976 "config": [] 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "subsystem": "iobuf", 00:20:15.976 "config": [ 00:20:15.976 { 00:20:15.976 "method": "iobuf_set_options", 00:20:15.976 "params": { 00:20:15.976 "small_pool_count": 8192, 00:20:15.976 "large_pool_count": 1024, 00:20:15.976 "small_bufsize": 8192, 00:20:15.976 "large_bufsize": 135168 00:20:15.976 } 00:20:15.976 } 00:20:15.976 ] 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "subsystem": "sock", 00:20:15.976 "config": [ 00:20:15.976 { 00:20:15.976 "method": "sock_impl_set_options", 00:20:15.976 "params": { 00:20:15.976 "impl_name": "posix", 00:20:15.976 "recv_buf_size": 2097152, 00:20:15.976 "send_buf_size": 2097152, 00:20:15.976 "enable_recv_pipe": true, 00:20:15.976 "enable_quickack": false, 00:20:15.976 "enable_placement_id": 0, 00:20:15.976 "enable_zerocopy_send_server": true, 00:20:15.976 "enable_zerocopy_send_client": false, 00:20:15.976 "zerocopy_threshold": 0, 00:20:15.976 "tls_version": 0, 00:20:15.976 "enable_ktls": false 00:20:15.976 } 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "method": "sock_impl_set_options", 00:20:15.976 "params": { 00:20:15.976 "impl_name": "ssl", 00:20:15.976 "recv_buf_size": 4096, 00:20:15.976 "send_buf_size": 4096, 00:20:15.976 "enable_recv_pipe": true, 00:20:15.976 "enable_quickack": false, 00:20:15.976 "enable_placement_id": 0, 00:20:15.976 "enable_zerocopy_send_server": true, 00:20:15.976 "enable_zerocopy_send_client": false, 00:20:15.976 "zerocopy_threshold": 0, 00:20:15.976 "tls_version": 0, 00:20:15.976 "enable_ktls": false 00:20:15.976 } 00:20:15.976 } 00:20:15.976 ] 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "subsystem": "vmd", 00:20:15.976 "config": [] 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "subsystem": "accel", 00:20:15.976 "config": [ 00:20:15.976 { 00:20:15.976 "method": "accel_set_options", 00:20:15.976 "params": { 00:20:15.976 "small_cache_size": 128, 00:20:15.976 "large_cache_size": 16, 00:20:15.976 "task_count": 2048, 00:20:15.976 "sequence_count": 2048, 00:20:15.976 "buf_count": 2048 00:20:15.976 } 00:20:15.976 } 00:20:15.976 ] 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "subsystem": "bdev", 00:20:15.976 "config": [ 00:20:15.976 { 00:20:15.976 "method": "bdev_set_options", 00:20:15.976 "params": { 00:20:15.976 "bdev_io_pool_size": 65535, 
00:20:15.976 "bdev_io_cache_size": 256, 00:20:15.976 "bdev_auto_examine": true, 00:20:15.976 "iobuf_small_cache_size": 128, 00:20:15.976 "iobuf_large_cache_size": 16 00:20:15.976 } 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "method": "bdev_raid_set_options", 00:20:15.976 "params": { 00:20:15.976 "process_window_size_kb": 1024 00:20:15.976 } 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "method": "bdev_iscsi_set_options", 00:20:15.976 "params": { 00:20:15.976 "timeout_sec": 30 00:20:15.976 } 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "method": "bdev_nvme_set_options", 00:20:15.976 "params": { 00:20:15.976 "action_on_timeout": "none", 00:20:15.976 "timeout_us": 0, 00:20:15.976 "timeout_admin_us": 0, 00:20:15.976 "keep_alive_timeout_ms": 10000, 00:20:15.976 "arbitration_burst": 0, 00:20:15.976 "low_priority_weight": 0, 00:20:15.976 "medium_priority_weight": 0, 00:20:15.976 "high_priority_weight": 0, 00:20:15.976 "nvme_adminq_poll_period_us": 10000, 00:20:15.976 "nvme_ioq_poll_period_us": 0, 00:20:15.976 "io_queue_requests": 512, 00:20:15.976 "delay_cmd_submit": true, 00:20:15.976 "transport_retry_count": 4, 00:20:15.976 "bdev_retry_count": 3, 00:20:15.976 "transport_ack_timeout": 0, 00:20:15.976 "ctrlr_loss_timeout_sec": 0, 00:20:15.976 "reconnect_delay_sec": 0, 00:20:15.976 "fast_io_fail_timeout_sec": 0, 00:20:15.976 "disable_auto_failback": false, 00:20:15.976 "generate_uuids": false, 00:20:15.976 "transport_tos": 0, 00:20:15.976 "nvme_error_stat": false, 00:20:15.976 "rdma_srq_size": 0, 00:20:15.976 "io_path_stat": false, 00:20:15.976 "allow_accel_sequence": false, 00:20:15.976 "rdma_max_cq_size": 0, 00:20:15.976 "rdma_cm_event_timeout_ms": 0, 00:20:15.976 "dhchap_digests": [ 00:20:15.976 "sha256", 00:20:15.976 "sha384", 00:20:15.976 "sha512" 00:20:15.976 ], 00:20:15.976 "dhchap_dhgroups": [ 00:20:15.976 "null", 00:20:15.976 "ffdhe2048", 00:20:15.976 "ffdhe3072", 00:20:15.976 "ffdhe4096", 00:20:15.976 "ffdhe6144", 00:20:15.976 "ffdhe8192" 00:20:15.976 ] 00:20:15.976 } 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "method": "bdev_nvme_attach_controller", 00:20:15.976 "params": { 00:20:15.976 "name": "TLSTEST", 00:20:15.976 "trtype": "TCP", 00:20:15.976 "adrfam": "IPv4", 00:20:15.976 "traddr": "10.0.0.2", 00:20:15.976 "trsvcid": "4420", 00:20:15.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.976 "prchk_reftag": false, 00:20:15.976 "prchk_guard": false, 00:20:15.976 "ctrlr_loss_timeout_sec": 0, 00:20:15.976 "reconnect_delay_sec": 0, 00:20:15.976 "fast_io_fail_timeout_sec": 0, 00:20:15.976 "psk": "/tmp/tmp.SKN1k2KAyO", 00:20:15.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.976 "hdgst": false, 00:20:15.976 "ddgst": false 00:20:15.976 } 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "method": "bdev_nvme_set_hotplug", 00:20:15.976 "params": { 00:20:15.976 "period_us": 100000, 00:20:15.976 "enable": false 00:20:15.976 } 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "method": "bdev_wait_for_examine" 00:20:15.976 } 00:20:15.976 ] 00:20:15.976 }, 00:20:15.976 { 00:20:15.976 "subsystem": "nbd", 00:20:15.976 "config": [] 00:20:15.976 } 00:20:15.976 ] 00:20:15.976 }' 00:20:15.976 13:03:20 -- target/tls.sh@199 -- # killprocess 4010700 00:20:15.976 13:03:20 -- common/autotest_common.sh@936 -- # '[' -z 4010700 ']' 00:20:15.976 13:03:20 -- common/autotest_common.sh@940 -- # kill -0 4010700 00:20:15.976 13:03:20 -- common/autotest_common.sh@941 -- # uname 00:20:15.976 13:03:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.976 13:03:20 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 4010700 00:20:15.976 13:03:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:15.976 13:03:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:15.976 13:03:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4010700' 00:20:15.976 killing process with pid 4010700 00:20:15.977 13:03:20 -- common/autotest_common.sh@955 -- # kill 4010700 00:20:15.977 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.977 00:20:15.977 Latency(us) 00:20:15.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.977 =================================================================================================================== 00:20:15.977 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.977 [2024-04-26 13:03:20.935909] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:15.977 13:03:20 -- common/autotest_common.sh@960 -- # wait 4010700 00:20:16.238 13:03:21 -- target/tls.sh@200 -- # killprocess 4010321 00:20:16.238 13:03:21 -- common/autotest_common.sh@936 -- # '[' -z 4010321 ']' 00:20:16.238 13:03:21 -- common/autotest_common.sh@940 -- # kill -0 4010321 00:20:16.238 13:03:21 -- common/autotest_common.sh@941 -- # uname 00:20:16.238 13:03:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:16.238 13:03:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4010321 00:20:16.238 13:03:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:16.238 13:03:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:16.238 13:03:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4010321' 00:20:16.238 killing process with pid 4010321 00:20:16.238 13:03:21 -- common/autotest_common.sh@955 -- # kill 4010321 00:20:16.238 [2024-04-26 13:03:21.102473] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:16.238 13:03:21 -- common/autotest_common.sh@960 -- # wait 4010321 00:20:16.238 13:03:21 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:16.238 13:03:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:16.238 13:03:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:16.238 13:03:21 -- common/autotest_common.sh@10 -- # set +x 00:20:16.238 13:03:21 -- target/tls.sh@203 -- # echo '{ 00:20:16.238 "subsystems": [ 00:20:16.238 { 00:20:16.238 "subsystem": "keyring", 00:20:16.238 "config": [] 00:20:16.238 }, 00:20:16.238 { 00:20:16.238 "subsystem": "iobuf", 00:20:16.238 "config": [ 00:20:16.238 { 00:20:16.238 "method": "iobuf_set_options", 00:20:16.238 "params": { 00:20:16.238 "small_pool_count": 8192, 00:20:16.238 "large_pool_count": 1024, 00:20:16.238 "small_bufsize": 8192, 00:20:16.238 "large_bufsize": 135168 00:20:16.238 } 00:20:16.238 } 00:20:16.238 ] 00:20:16.238 }, 00:20:16.238 { 00:20:16.238 "subsystem": "sock", 00:20:16.238 "config": [ 00:20:16.238 { 00:20:16.238 "method": "sock_impl_set_options", 00:20:16.238 "params": { 00:20:16.238 "impl_name": "posix", 00:20:16.238 "recv_buf_size": 2097152, 00:20:16.238 "send_buf_size": 2097152, 00:20:16.238 "enable_recv_pipe": true, 00:20:16.238 "enable_quickack": false, 00:20:16.238 "enable_placement_id": 0, 00:20:16.238 "enable_zerocopy_send_server": true, 00:20:16.238 "enable_zerocopy_send_client": false, 00:20:16.238 "zerocopy_threshold": 0, 
00:20:16.238 "tls_version": 0, 00:20:16.238 "enable_ktls": false 00:20:16.238 } 00:20:16.238 }, 00:20:16.238 { 00:20:16.238 "method": "sock_impl_set_options", 00:20:16.238 "params": { 00:20:16.238 "impl_name": "ssl", 00:20:16.238 "recv_buf_size": 4096, 00:20:16.238 "send_buf_size": 4096, 00:20:16.238 "enable_recv_pipe": true, 00:20:16.238 "enable_quickack": false, 00:20:16.238 "enable_placement_id": 0, 00:20:16.238 "enable_zerocopy_send_server": true, 00:20:16.238 "enable_zerocopy_send_client": false, 00:20:16.238 "zerocopy_threshold": 0, 00:20:16.238 "tls_version": 0, 00:20:16.238 "enable_ktls": false 00:20:16.238 } 00:20:16.238 } 00:20:16.238 ] 00:20:16.238 }, 00:20:16.238 { 00:20:16.238 "subsystem": "vmd", 00:20:16.238 "config": [] 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "subsystem": "accel", 00:20:16.239 "config": [ 00:20:16.239 { 00:20:16.239 "method": "accel_set_options", 00:20:16.239 "params": { 00:20:16.239 "small_cache_size": 128, 00:20:16.239 "large_cache_size": 16, 00:20:16.239 "task_count": 2048, 00:20:16.239 "sequence_count": 2048, 00:20:16.239 "buf_count": 2048 00:20:16.239 } 00:20:16.239 } 00:20:16.239 ] 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "subsystem": "bdev", 00:20:16.239 "config": [ 00:20:16.239 { 00:20:16.239 "method": "bdev_set_options", 00:20:16.239 "params": { 00:20:16.239 "bdev_io_pool_size": 65535, 00:20:16.239 "bdev_io_cache_size": 256, 00:20:16.239 "bdev_auto_examine": true, 00:20:16.239 "iobuf_small_cache_size": 128, 00:20:16.239 "iobuf_large_cache_size": 16 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "bdev_raid_set_options", 00:20:16.239 "params": { 00:20:16.239 "process_window_size_kb": 1024 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "bdev_iscsi_set_options", 00:20:16.239 "params": { 00:20:16.239 "timeout_sec": 30 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "bdev_nvme_set_options", 00:20:16.239 "params": { 00:20:16.239 "action_on_timeout": "none", 00:20:16.239 "timeout_us": 0, 00:20:16.239 "timeout_admin_us": 0, 00:20:16.239 "keep_alive_timeout_ms": 10000, 00:20:16.239 "arbitration_burst": 0, 00:20:16.239 "low_priority_weight": 0, 00:20:16.239 "medium_priority_weight": 0, 00:20:16.239 "high_priority_weight": 0, 00:20:16.239 "nvme_adminq_poll_period_us": 10000, 00:20:16.239 "nvme_ioq_poll_period_us": 0, 00:20:16.239 "io_queue_requests": 0, 00:20:16.239 "delay_cmd_submit": true, 00:20:16.239 "transport_retry_count": 4, 00:20:16.239 "bdev_retry_count": 3, 00:20:16.239 "transport_ack_timeout": 0, 00:20:16.239 "ctrlr_loss_timeout_sec": 0, 00:20:16.239 "reconnect_delay_sec": 0, 00:20:16.239 "fast_io_fail_timeout_sec": 0, 00:20:16.239 "disable_auto_failback": false, 00:20:16.239 "generate_uuids": false, 00:20:16.239 "transport_tos": 0, 00:20:16.239 "nvme_error_stat": false, 00:20:16.239 "rdma_srq_size": 0, 00:20:16.239 "io_path_stat": false, 00:20:16.239 "allow_accel_sequence": false, 00:20:16.239 "rdma_max_cq_size": 0, 00:20:16.239 "rdma_cm_event_timeout_ms": 0, 00:20:16.239 "dhchap_digests": [ 00:20:16.239 "sha256", 00:20:16.239 "sha384", 00:20:16.239 "sha512" 00:20:16.239 ], 00:20:16.239 "dhchap_dhgroups": [ 00:20:16.239 "null", 00:20:16.239 "ffdhe2048", 00:20:16.239 "ffdhe3072", 00:20:16.239 "ffdhe4096", 00:20:16.239 "ffdhe6144", 00:20:16.239 "ffdhe8192" 00:20:16.239 ] 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "bdev_nvme_set_hotplug", 00:20:16.239 "params": { 00:20:16.239 "period_us": 100000, 00:20:16.239 "enable": false 00:20:16.239 } 00:20:16.239 }, 
00:20:16.239 { 00:20:16.239 "method": "bdev_malloc_create", 00:20:16.239 "params": { 00:20:16.239 "name": "malloc0", 00:20:16.239 "num_blocks": 8192, 00:20:16.239 "block_size": 4096, 00:20:16.239 "physical_block_size": 4096, 00:20:16.239 "uuid": "a41ca54e-e8ec-435f-9dbd-3cb2aa5c6c50", 00:20:16.239 "optimal_io_boundary": 0 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "bdev_wait_for_examine" 00:20:16.239 } 00:20:16.239 ] 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "subsystem": "nbd", 00:20:16.239 "config": [] 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "subsystem": "scheduler", 00:20:16.239 "config": [ 00:20:16.239 { 00:20:16.239 "method": "framework_set_scheduler", 00:20:16.239 "params": { 00:20:16.239 "name": "static" 00:20:16.239 } 00:20:16.239 } 00:20:16.239 ] 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "subsystem": "nvmf", 00:20:16.239 "config": [ 00:20:16.239 { 00:20:16.239 "method": "nvmf_set_config", 00:20:16.239 "params": { 00:20:16.239 "discovery_filter": "match_any", 00:20:16.239 "admin_cmd_passthru": { 00:20:16.239 "identify_ctrlr": false 00:20:16.239 } 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "nvmf_set_max_subsystems", 00:20:16.239 "params": { 00:20:16.239 "max_subsystems": 1024 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "nvmf_set_crdt", 00:20:16.239 "params": { 00:20:16.239 "crdt1": 0, 00:20:16.239 "crdt2": 0, 00:20:16.239 "crdt3": 0 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "nvmf_create_transport", 00:20:16.239 "params": { 00:20:16.239 "trtype": "TCP", 00:20:16.239 "max_queue_depth": 128, 00:20:16.239 "max_io_qpairs_per_ctrlr": 127, 00:20:16.239 "in_capsule_data_size": 4096, 00:20:16.239 "max_io_size": 131072, 00:20:16.239 "io_unit_size": 131072, 00:20:16.239 "max_aq_depth": 128, 00:20:16.239 "num_shared_buffers": 511, 00:20:16.239 "buf_cache_size": 4294967295, 00:20:16.239 "dif_insert_or_strip": false, 00:20:16.239 "zcopy": false, 00:20:16.239 "c2h_success": false, 00:20:16.239 "sock_priority": 0, 00:20:16.239 "abort_timeout_sec": 1, 00:20:16.239 "ack_timeout": 0, 00:20:16.239 "data_wr_pool_size": 0 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "nvmf_create_subsystem", 00:20:16.239 "params": { 00:20:16.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.239 "allow_any_host": false, 00:20:16.239 "serial_number": "SPDK00000000000001", 00:20:16.239 "model_number": "SPDK bdev Controller", 00:20:16.239 "max_namespaces": 10, 00:20:16.239 "min_cntlid": 1, 00:20:16.239 "max_cntlid": 65519, 00:20:16.239 "ana_reporting": false 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "nvmf_subsystem_add_host", 00:20:16.239 "params": { 00:20:16.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.239 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.239 "psk": "/tmp/tmp.SKN1k2KAyO" 00:20:16.239 } 00:20:16.239 }, 00:20:16.239 { 00:20:16.239 "method": "nvmf_subsystem_add_ns", 00:20:16.239 "params": { 00:20:16.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.239 "namespace": { 00:20:16.239 "nsid": 1, 00:20:16.239 "bdev_name": "malloc0", 00:20:16.239 "nguid": "A41CA54EE8EC435F9DBD3CB2AA5C6C50", 00:20:16.239 "uuid": "a41ca54e-e8ec-435f-9dbd-3cb2aa5c6c50", 00:20:16.239 "no_auto_visible": false 00:20:16.239 } 00:20:16.239 } 00:20:16.240 }, 00:20:16.240 { 00:20:16.240 "method": "nvmf_subsystem_add_listener", 00:20:16.240 "params": { 00:20:16.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.240 "listen_address": { 00:20:16.240 "trtype": "TCP", 00:20:16.240 "adrfam": 
"IPv4", 00:20:16.240 "traddr": "10.0.0.2", 00:20:16.240 "trsvcid": "4420" 00:20:16.240 }, 00:20:16.240 "secure_channel": true 00:20:16.240 } 00:20:16.240 } 00:20:16.240 ] 00:20:16.240 } 00:20:16.240 ] 00:20:16.240 }' 00:20:16.240 13:03:21 -- nvmf/common.sh@470 -- # nvmfpid=4011125 00:20:16.240 13:03:21 -- nvmf/common.sh@471 -- # waitforlisten 4011125 00:20:16.240 13:03:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:16.240 13:03:21 -- common/autotest_common.sh@817 -- # '[' -z 4011125 ']' 00:20:16.240 13:03:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.240 13:03:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.240 13:03:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.240 13:03:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.240 13:03:21 -- common/autotest_common.sh@10 -- # set +x 00:20:16.240 [2024-04-26 13:03:21.277201] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:16.240 [2024-04-26 13:03:21.277254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.501 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.501 [2024-04-26 13:03:21.360426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.501 [2024-04-26 13:03:21.413173] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.501 [2024-04-26 13:03:21.413205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.501 [2024-04-26 13:03:21.413210] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.501 [2024-04-26 13:03:21.413215] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.501 [2024-04-26 13:03:21.413219] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.501 [2024-04-26 13:03:21.413264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.762 [2024-04-26 13:03:21.588694] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.762 [2024-04-26 13:03:21.604666] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:16.762 [2024-04-26 13:03:21.620714] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.762 [2024-04-26 13:03:21.629132] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.052 13:03:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.052 13:03:22 -- common/autotest_common.sh@850 -- # return 0 00:20:17.052 13:03:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:17.052 13:03:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.052 13:03:22 -- common/autotest_common.sh@10 -- # set +x 00:20:17.052 13:03:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.052 13:03:22 -- target/tls.sh@207 -- # bdevperf_pid=4011209 00:20:17.052 13:03:22 -- target/tls.sh@208 -- # waitforlisten 4011209 /var/tmp/bdevperf.sock 00:20:17.052 13:03:22 -- common/autotest_common.sh@817 -- # '[' -z 4011209 ']' 00:20:17.052 13:03:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.052 13:03:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:17.052 13:03:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
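The JSON blob echoed above is the target-side configuration that tls.sh pipes into nvmf_tgt over /dev/fd/62: a static scheduler, an 8192-block x 4096 B (32 MiB) malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 restricted to nqn.2016-06.io.spdk:host1 with the PSK file /tmp/tmp.SKN1k2KAyO, and a listener on 10.0.0.2:4420 with "secure_channel": true, which is the experimental TLS listener the notices above report. The same state can be built against an already running target with rpc.py; a minimal sketch, reusing the identifiers from this log and assuming the default /var/tmp/spdk.sock RPC socket, is:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO

These are the same calls the script issues later in this run; the -k flag on nvmf_subsystem_add_listener is what the saved configuration reflects as "secure_channel": true, and the path form of --psk on nvmf_subsystem_add_host is the feature the target warns is deprecated (PSK path, scheduled for removal in v24.09).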
00:20:17.052 13:03:22 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:17.052 13:03:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:17.052 13:03:22 -- common/autotest_common.sh@10 -- # set +x 00:20:17.052 13:03:22 -- target/tls.sh@204 -- # echo '{ 00:20:17.052 "subsystems": [ 00:20:17.053 { 00:20:17.053 "subsystem": "keyring", 00:20:17.053 "config": [] 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "subsystem": "iobuf", 00:20:17.053 "config": [ 00:20:17.053 { 00:20:17.053 "method": "iobuf_set_options", 00:20:17.053 "params": { 00:20:17.053 "small_pool_count": 8192, 00:20:17.053 "large_pool_count": 1024, 00:20:17.053 "small_bufsize": 8192, 00:20:17.053 "large_bufsize": 135168 00:20:17.053 } 00:20:17.053 } 00:20:17.053 ] 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "subsystem": "sock", 00:20:17.053 "config": [ 00:20:17.053 { 00:20:17.053 "method": "sock_impl_set_options", 00:20:17.053 "params": { 00:20:17.053 "impl_name": "posix", 00:20:17.053 "recv_buf_size": 2097152, 00:20:17.053 "send_buf_size": 2097152, 00:20:17.053 "enable_recv_pipe": true, 00:20:17.053 "enable_quickack": false, 00:20:17.053 "enable_placement_id": 0, 00:20:17.053 "enable_zerocopy_send_server": true, 00:20:17.053 "enable_zerocopy_send_client": false, 00:20:17.053 "zerocopy_threshold": 0, 00:20:17.053 "tls_version": 0, 00:20:17.053 "enable_ktls": false 00:20:17.053 } 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "method": "sock_impl_set_options", 00:20:17.053 "params": { 00:20:17.053 "impl_name": "ssl", 00:20:17.053 "recv_buf_size": 4096, 00:20:17.053 "send_buf_size": 4096, 00:20:17.053 "enable_recv_pipe": true, 00:20:17.053 "enable_quickack": false, 00:20:17.053 "enable_placement_id": 0, 00:20:17.053 "enable_zerocopy_send_server": true, 00:20:17.053 "enable_zerocopy_send_client": false, 00:20:17.053 "zerocopy_threshold": 0, 00:20:17.053 "tls_version": 0, 00:20:17.053 "enable_ktls": false 00:20:17.053 } 00:20:17.053 } 00:20:17.053 ] 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "subsystem": "vmd", 00:20:17.053 "config": [] 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "subsystem": "accel", 00:20:17.053 "config": [ 00:20:17.053 { 00:20:17.053 "method": "accel_set_options", 00:20:17.053 "params": { 00:20:17.053 "small_cache_size": 128, 00:20:17.053 "large_cache_size": 16, 00:20:17.053 "task_count": 2048, 00:20:17.053 "sequence_count": 2048, 00:20:17.053 "buf_count": 2048 00:20:17.053 } 00:20:17.053 } 00:20:17.053 ] 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "subsystem": "bdev", 00:20:17.053 "config": [ 00:20:17.053 { 00:20:17.053 "method": "bdev_set_options", 00:20:17.053 "params": { 00:20:17.053 "bdev_io_pool_size": 65535, 00:20:17.053 "bdev_io_cache_size": 256, 00:20:17.053 "bdev_auto_examine": true, 00:20:17.053 "iobuf_small_cache_size": 128, 00:20:17.053 "iobuf_large_cache_size": 16 00:20:17.053 } 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "method": "bdev_raid_set_options", 00:20:17.053 "params": { 00:20:17.053 "process_window_size_kb": 1024 00:20:17.053 } 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "method": "bdev_iscsi_set_options", 00:20:17.053 "params": { 00:20:17.053 "timeout_sec": 30 00:20:17.053 } 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "method": "bdev_nvme_set_options", 00:20:17.053 "params": { 00:20:17.053 "action_on_timeout": "none", 00:20:17.053 "timeout_us": 0, 00:20:17.053 "timeout_admin_us": 0, 00:20:17.053 "keep_alive_timeout_ms": 10000, 00:20:17.053 
"arbitration_burst": 0, 00:20:17.053 "low_priority_weight": 0, 00:20:17.053 "medium_priority_weight": 0, 00:20:17.053 "high_priority_weight": 0, 00:20:17.053 "nvme_adminq_poll_period_us": 10000, 00:20:17.053 "nvme_ioq_poll_period_us": 0, 00:20:17.053 "io_queue_requests": 512, 00:20:17.053 "delay_cmd_submit": true, 00:20:17.053 "transport_retry_count": 4, 00:20:17.053 "bdev_retry_count": 3, 00:20:17.053 "transport_ack_timeout": 0, 00:20:17.053 "ctrlr_loss_timeout_sec": 0, 00:20:17.053 "reconnect_delay_sec": 0, 00:20:17.053 "fast_io_fail_timeout_sec": 0, 00:20:17.053 "disable_auto_failback": false, 00:20:17.053 "generate_uuids": false, 00:20:17.053 "transport_tos": 0, 00:20:17.053 "nvme_error_stat": false, 00:20:17.053 "rdma_srq_size": 0, 00:20:17.053 "io_path_stat": false, 00:20:17.053 "allow_accel_sequence": false, 00:20:17.053 "rdma_max_cq_size": 0, 00:20:17.053 "rdma_cm_event_timeout_ms": 0, 00:20:17.053 "dhchap_digests": [ 00:20:17.053 "sha256", 00:20:17.053 "sha384", 00:20:17.053 "sha512" 00:20:17.053 ], 00:20:17.053 "dhchap_dhgroups": [ 00:20:17.053 "null", 00:20:17.053 "ffdhe2048", 00:20:17.053 "ffdhe3072", 00:20:17.053 "ffdhe4096", 00:20:17.053 "ffdhe6144", 00:20:17.053 "ffdhe8192" 00:20:17.053 ] 00:20:17.053 } 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "method": "bdev_nvme_attach_controller", 00:20:17.053 "params": { 00:20:17.053 "name": "TLSTEST", 00:20:17.053 "trtype": "TCP", 00:20:17.053 "adrfam": "IPv4", 00:20:17.053 "traddr": "10.0.0.2", 00:20:17.053 "trsvcid": "4420", 00:20:17.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.053 "prchk_reftag": false, 00:20:17.053 "prchk_guard": false, 00:20:17.053 "ctrlr_loss_timeout_sec": 0, 00:20:17.053 "reconnect_delay_sec": 0, 00:20:17.053 "fast_io_fail_timeout_sec": 0, 00:20:17.053 "psk": "/tmp/tmp.SKN1k2KAyO", 00:20:17.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.053 "hdgst": false, 00:20:17.053 "ddgst": false 00:20:17.053 } 00:20:17.053 }, 00:20:17.053 { 00:20:17.053 "method": "bdev_nvme_set_hotplug", 00:20:17.053 "params": { 00:20:17.053 "period_us": 100000, 00:20:17.053 "enable": false 00:20:17.053 } 00:20:17.053 }, 00:20:17.053 { 00:20:17.054 "method": "bdev_wait_for_examine" 00:20:17.054 } 00:20:17.054 ] 00:20:17.054 }, 00:20:17.054 { 00:20:17.054 "subsystem": "nbd", 00:20:17.054 "config": [] 00:20:17.054 } 00:20:17.054 ] 00:20:17.054 }' 00:20:17.342 [2024-04-26 13:03:22.121712] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:20:17.342 [2024-04-26 13:03:22.121763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011209 ] 00:20:17.342 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.342 [2024-04-26 13:03:22.172355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.342 [2024-04-26 13:03:22.223591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.342 [2024-04-26 13:03:22.340156] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.342 [2024-04-26 13:03:22.340222] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:17.914 13:03:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.914 13:03:22 -- common/autotest_common.sh@850 -- # return 0 00:20:17.914 13:03:22 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:18.175 Running I/O for 10 seconds... 00:20:28.177 00:20:28.177 Latency(us) 00:20:28.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.177 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:28.177 Verification LBA range: start 0x0 length 0x2000 00:20:28.177 TLSTESTn1 : 10.01 5937.57 23.19 0.00 0.00 21528.08 4614.83 22937.60 00:20:28.177 =================================================================================================================== 00:20:28.177 Total : 5937.57 23.19 0.00 0.00 21528.08 4614.83 22937.60 00:20:28.177 0 00:20:28.177 13:03:33 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.177 13:03:33 -- target/tls.sh@214 -- # killprocess 4011209 00:20:28.177 13:03:33 -- common/autotest_common.sh@936 -- # '[' -z 4011209 ']' 00:20:28.177 13:03:33 -- common/autotest_common.sh@940 -- # kill -0 4011209 00:20:28.177 13:03:33 -- common/autotest_common.sh@941 -- # uname 00:20:28.177 13:03:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:28.177 13:03:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4011209 00:20:28.177 13:03:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:28.177 13:03:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:28.177 13:03:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4011209' 00:20:28.177 killing process with pid 4011209 00:20:28.177 13:03:33 -- common/autotest_common.sh@955 -- # kill 4011209 00:20:28.177 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.177 00:20:28.177 Latency(us) 00:20:28.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.177 =================================================================================================================== 00:20:28.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.177 [2024-04-26 13:03:33.095168] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:28.177 13:03:33 -- common/autotest_common.sh@960 -- # wait 4011209 00:20:28.177 13:03:33 -- target/tls.sh@215 -- # killprocess 4011125 00:20:28.177 13:03:33 -- common/autotest_common.sh@936 -- # '[' -z 4011125 ']' 
00:20:28.177 13:03:33 -- common/autotest_common.sh@940 -- # kill -0 4011125 00:20:28.177 13:03:33 -- common/autotest_common.sh@941 -- # uname 00:20:28.177 13:03:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:28.177 13:03:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4011125 00:20:28.439 13:03:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:28.439 13:03:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:28.439 13:03:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4011125' 00:20:28.439 killing process with pid 4011125 00:20:28.439 13:03:33 -- common/autotest_common.sh@955 -- # kill 4011125 00:20:28.439 [2024-04-26 13:03:33.260679] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:28.439 13:03:33 -- common/autotest_common.sh@960 -- # wait 4011125 00:20:28.439 13:03:33 -- target/tls.sh@218 -- # nvmfappstart 00:20:28.439 13:03:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:28.439 13:03:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:28.439 13:03:33 -- common/autotest_common.sh@10 -- # set +x 00:20:28.439 13:03:33 -- nvmf/common.sh@470 -- # nvmfpid=4013485 00:20:28.439 13:03:33 -- nvmf/common.sh@471 -- # waitforlisten 4013485 00:20:28.439 13:03:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:28.439 13:03:33 -- common/autotest_common.sh@817 -- # '[' -z 4013485 ']' 00:20:28.439 13:03:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.439 13:03:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:28.439 13:03:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.439 13:03:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:28.439 13:03:33 -- common/autotest_common.sh@10 -- # set +x 00:20:28.439 [2024-04-26 13:03:33.433117] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:28.439 [2024-04-26 13:03:33.433167] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.439 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.700 [2024-04-26 13:03:33.499788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.700 [2024-04-26 13:03:33.562638] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.700 [2024-04-26 13:03:33.562678] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.700 [2024-04-26 13:03:33.562685] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.700 [2024-04-26 13:03:33.562691] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.700 [2024-04-26 13:03:33.562697] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
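That first pass drove TLSTESTn1 at 5937.57 IOPS with 4 KiB I/O over the 10-second window, which matches the MiB/s column in the table:

    5937.57 IOPS * 4096 B = 24,320,287 B/s ≈ 23.19 MiB/s

It used the path-based PSK on both sides, hence the two "scheduled for removal in v24.09" deprecation hits logged at shutdown (spdk_nvme_ctrlr_opts.psk on the initiator, PSK path on the target). The target starting here (pid 4013485) repeats the scenario, this time with the initiator registering the PSK through the keyring as key0 instead of passing the file path to the controller directly.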
00:20:28.700 [2024-04-26 13:03:33.562717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.271 13:03:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:29.271 13:03:34 -- common/autotest_common.sh@850 -- # return 0 00:20:29.271 13:03:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:29.271 13:03:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:29.271 13:03:34 -- common/autotest_common.sh@10 -- # set +x 00:20:29.271 13:03:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.271 13:03:34 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.SKN1k2KAyO 00:20:29.271 13:03:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.SKN1k2KAyO 00:20:29.271 13:03:34 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.532 [2024-04-26 13:03:34.385919] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.532 13:03:34 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:29.533 13:03:34 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:29.794 [2024-04-26 13:03:34.714733] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.794 [2024-04-26 13:03:34.714928] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.794 13:03:34 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:30.055 malloc0 00:20:30.055 13:03:34 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:30.055 13:03:35 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SKN1k2KAyO 00:20:30.317 [2024-04-26 13:03:35.222826] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:30.317 13:03:35 -- target/tls.sh@222 -- # bdevperf_pid=4013914 00:20:30.317 13:03:35 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.317 13:03:35 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:30.317 13:03:35 -- target/tls.sh@225 -- # waitforlisten 4013914 /var/tmp/bdevperf.sock 00:20:30.317 13:03:35 -- common/autotest_common.sh@817 -- # '[' -z 4013914 ']' 00:20:30.317 13:03:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.317 13:03:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:30.317 13:03:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
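For this pass the target is assembled with individual rpc.py calls (transport, subsystem, TLS listener via -k, a 32 MiB malloc0 namespace, and the allowed host with its PSK path), and a fresh bdevperf instance is about to come up on /var/tmp/bdevperf.sock. The client side then takes the keyring route: the PSK file is registered as a named key and the controller is attached by key name. A minimal sketch of that client-side sequence, using the socket and identifiers from this log, is:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SKN1k2KAyO
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attached controller surfaces as nvme0n1, which is the bdev the 1-second verify run below exercises.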
00:20:30.317 13:03:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:30.317 13:03:35 -- common/autotest_common.sh@10 -- # set +x 00:20:30.317 [2024-04-26 13:03:35.298997] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:30.317 [2024-04-26 13:03:35.299049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013914 ] 00:20:30.317 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.317 [2024-04-26 13:03:35.373201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.578 [2024-04-26 13:03:35.425209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.150 13:03:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:31.150 13:03:36 -- common/autotest_common.sh@850 -- # return 0 00:20:31.150 13:03:36 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SKN1k2KAyO 00:20:31.150 13:03:36 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:31.411 [2024-04-26 13:03:36.347697] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.411 nvme0n1 00:20:31.411 13:03:36 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.672 Running I/O for 1 seconds... 
00:20:32.613 00:20:32.613 Latency(us) 00:20:32.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.614 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:32.614 Verification LBA range: start 0x0 length 0x2000 00:20:32.614 nvme0n1 : 1.05 4558.88 17.81 0.00 0.00 27457.37 5352.11 51336.53 00:20:32.614 =================================================================================================================== 00:20:32.614 Total : 4558.88 17.81 0.00 0.00 27457.37 5352.11 51336.53 00:20:32.614 0 00:20:32.614 13:03:37 -- target/tls.sh@234 -- # killprocess 4013914 00:20:32.614 13:03:37 -- common/autotest_common.sh@936 -- # '[' -z 4013914 ']' 00:20:32.614 13:03:37 -- common/autotest_common.sh@940 -- # kill -0 4013914 00:20:32.614 13:03:37 -- common/autotest_common.sh@941 -- # uname 00:20:32.614 13:03:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.614 13:03:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4013914 00:20:32.614 13:03:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.614 13:03:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.614 13:03:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4013914' 00:20:32.614 killing process with pid 4013914 00:20:32.614 13:03:37 -- common/autotest_common.sh@955 -- # kill 4013914 00:20:32.614 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.614 00:20:32.614 Latency(us) 00:20:32.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.614 =================================================================================================================== 00:20:32.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.614 13:03:37 -- common/autotest_common.sh@960 -- # wait 4013914 00:20:32.874 13:03:37 -- target/tls.sh@235 -- # killprocess 4013485 00:20:32.874 13:03:37 -- common/autotest_common.sh@936 -- # '[' -z 4013485 ']' 00:20:32.874 13:03:37 -- common/autotest_common.sh@940 -- # kill -0 4013485 00:20:32.874 13:03:37 -- common/autotest_common.sh@941 -- # uname 00:20:32.874 13:03:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.874 13:03:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4013485 00:20:32.874 13:03:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:32.874 13:03:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:32.874 13:03:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4013485' 00:20:32.874 killing process with pid 4013485 00:20:32.874 13:03:37 -- common/autotest_common.sh@955 -- # kill 4013485 00:20:32.874 [2024-04-26 13:03:37.801771] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:32.874 13:03:37 -- common/autotest_common.sh@960 -- # wait 4013485 00:20:33.134 13:03:37 -- target/tls.sh@238 -- # nvmfappstart 00:20:33.134 13:03:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:33.134 13:03:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:33.134 13:03:37 -- common/autotest_common.sh@10 -- # set +x 00:20:33.134 13:03:37 -- nvmf/common.sh@470 -- # nvmfpid=4014280 00:20:33.134 13:03:37 -- nvmf/common.sh@471 -- # waitforlisten 4014280 00:20:33.134 13:03:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:20:33.134 13:03:37 -- common/autotest_common.sh@817 -- # '[' -z 4014280 ']' 00:20:33.134 13:03:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.134 13:03:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:33.134 13:03:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.134 13:03:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:33.134 13:03:37 -- common/autotest_common.sh@10 -- # set +x 00:20:33.134 [2024-04-26 13:03:37.994360] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:33.134 [2024-04-26 13:03:37.994410] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.134 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.134 [2024-04-26 13:03:38.060522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.134 [2024-04-26 13:03:38.122428] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.134 [2024-04-26 13:03:38.122469] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.134 [2024-04-26 13:03:38.122477] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.134 [2024-04-26 13:03:38.122483] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.134 [2024-04-26 13:03:38.122489] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:33.134 [2024-04-26 13:03:38.122513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.075 13:03:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:34.075 13:03:38 -- common/autotest_common.sh@850 -- # return 0 00:20:34.075 13:03:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:34.075 13:03:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:34.075 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:20:34.075 13:03:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.075 13:03:38 -- target/tls.sh@239 -- # rpc_cmd 00:20:34.075 13:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.075 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:20:34.075 [2024-04-26 13:03:38.817059] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.075 malloc0 00:20:34.075 [2024-04-26 13:03:38.843819] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.075 [2024-04-26 13:03:38.844018] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.075 13:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.075 13:03:38 -- target/tls.sh@252 -- # bdevperf_pid=4014625 00:20:34.075 13:03:38 -- target/tls.sh@254 -- # waitforlisten 4014625 /var/tmp/bdevperf.sock 00:20:34.075 13:03:38 -- common/autotest_common.sh@817 -- # '[' -z 4014625 ']' 00:20:34.075 13:03:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.075 13:03:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.075 13:03:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.075 13:03:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.075 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:20:34.075 13:03:38 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:34.075 [2024-04-26 13:03:38.918087] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:20:34.075 [2024-04-26 13:03:38.918134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014625 ] 00:20:34.075 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.075 [2024-04-26 13:03:38.993454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.075 [2024-04-26 13:03:39.046088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.646 13:03:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:34.646 13:03:39 -- common/autotest_common.sh@850 -- # return 0 00:20:34.646 13:03:39 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SKN1k2KAyO 00:20:34.904 13:03:39 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:34.904 [2024-04-26 13:03:39.964152] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.164 nvme0n1 00:20:35.164 13:03:40 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.164 Running I/O for 1 seconds... 00:20:36.103 00:20:36.103 Latency(us) 00:20:36.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.103 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:36.103 Verification LBA range: start 0x0 length 0x2000 00:20:36.103 nvme0n1 : 1.02 5667.81 22.14 0.00 0.00 22405.79 5434.03 29709.65 00:20:36.103 =================================================================================================================== 00:20:36.103 Total : 5667.81 22.14 0.00 0.00 22405.79 5434.03 29709.65 00:20:36.103 0 00:20:36.362 13:03:41 -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:36.362 13:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.362 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.362 13:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.362 13:03:41 -- target/tls.sh@263 -- # tgtcfg='{ 00:20:36.363 "subsystems": [ 00:20:36.363 { 00:20:36.363 "subsystem": "keyring", 00:20:36.363 "config": [ 00:20:36.363 { 00:20:36.363 "method": "keyring_file_add_key", 00:20:36.363 "params": { 00:20:36.363 "name": "key0", 00:20:36.363 "path": "/tmp/tmp.SKN1k2KAyO" 00:20:36.363 } 00:20:36.363 } 00:20:36.363 ] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "iobuf", 00:20:36.363 "config": [ 00:20:36.363 { 00:20:36.363 "method": "iobuf_set_options", 00:20:36.363 "params": { 00:20:36.363 "small_pool_count": 8192, 00:20:36.363 "large_pool_count": 1024, 00:20:36.363 "small_bufsize": 8192, 00:20:36.363 "large_bufsize": 135168 00:20:36.363 } 00:20:36.363 } 00:20:36.363 ] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "sock", 00:20:36.363 "config": [ 00:20:36.363 { 00:20:36.363 "method": "sock_impl_set_options", 00:20:36.363 "params": { 00:20:36.363 "impl_name": "posix", 00:20:36.363 "recv_buf_size": 2097152, 00:20:36.363 "send_buf_size": 2097152, 00:20:36.363 "enable_recv_pipe": true, 00:20:36.363 "enable_quickack": false, 00:20:36.363 "enable_placement_id": 0, 00:20:36.363 
"enable_zerocopy_send_server": true, 00:20:36.363 "enable_zerocopy_send_client": false, 00:20:36.363 "zerocopy_threshold": 0, 00:20:36.363 "tls_version": 0, 00:20:36.363 "enable_ktls": false 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "sock_impl_set_options", 00:20:36.363 "params": { 00:20:36.363 "impl_name": "ssl", 00:20:36.363 "recv_buf_size": 4096, 00:20:36.363 "send_buf_size": 4096, 00:20:36.363 "enable_recv_pipe": true, 00:20:36.363 "enable_quickack": false, 00:20:36.363 "enable_placement_id": 0, 00:20:36.363 "enable_zerocopy_send_server": true, 00:20:36.363 "enable_zerocopy_send_client": false, 00:20:36.363 "zerocopy_threshold": 0, 00:20:36.363 "tls_version": 0, 00:20:36.363 "enable_ktls": false 00:20:36.363 } 00:20:36.363 } 00:20:36.363 ] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "vmd", 00:20:36.363 "config": [] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "accel", 00:20:36.363 "config": [ 00:20:36.363 { 00:20:36.363 "method": "accel_set_options", 00:20:36.363 "params": { 00:20:36.363 "small_cache_size": 128, 00:20:36.363 "large_cache_size": 16, 00:20:36.363 "task_count": 2048, 00:20:36.363 "sequence_count": 2048, 00:20:36.363 "buf_count": 2048 00:20:36.363 } 00:20:36.363 } 00:20:36.363 ] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "bdev", 00:20:36.363 "config": [ 00:20:36.363 { 00:20:36.363 "method": "bdev_set_options", 00:20:36.363 "params": { 00:20:36.363 "bdev_io_pool_size": 65535, 00:20:36.363 "bdev_io_cache_size": 256, 00:20:36.363 "bdev_auto_examine": true, 00:20:36.363 "iobuf_small_cache_size": 128, 00:20:36.363 "iobuf_large_cache_size": 16 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "bdev_raid_set_options", 00:20:36.363 "params": { 00:20:36.363 "process_window_size_kb": 1024 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "bdev_iscsi_set_options", 00:20:36.363 "params": { 00:20:36.363 "timeout_sec": 30 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "bdev_nvme_set_options", 00:20:36.363 "params": { 00:20:36.363 "action_on_timeout": "none", 00:20:36.363 "timeout_us": 0, 00:20:36.363 "timeout_admin_us": 0, 00:20:36.363 "keep_alive_timeout_ms": 10000, 00:20:36.363 "arbitration_burst": 0, 00:20:36.363 "low_priority_weight": 0, 00:20:36.363 "medium_priority_weight": 0, 00:20:36.363 "high_priority_weight": 0, 00:20:36.363 "nvme_adminq_poll_period_us": 10000, 00:20:36.363 "nvme_ioq_poll_period_us": 0, 00:20:36.363 "io_queue_requests": 0, 00:20:36.363 "delay_cmd_submit": true, 00:20:36.363 "transport_retry_count": 4, 00:20:36.363 "bdev_retry_count": 3, 00:20:36.363 "transport_ack_timeout": 0, 00:20:36.363 "ctrlr_loss_timeout_sec": 0, 00:20:36.363 "reconnect_delay_sec": 0, 00:20:36.363 "fast_io_fail_timeout_sec": 0, 00:20:36.363 "disable_auto_failback": false, 00:20:36.363 "generate_uuids": false, 00:20:36.363 "transport_tos": 0, 00:20:36.363 "nvme_error_stat": false, 00:20:36.363 "rdma_srq_size": 0, 00:20:36.363 "io_path_stat": false, 00:20:36.363 "allow_accel_sequence": false, 00:20:36.363 "rdma_max_cq_size": 0, 00:20:36.363 "rdma_cm_event_timeout_ms": 0, 00:20:36.363 "dhchap_digests": [ 00:20:36.363 "sha256", 00:20:36.363 "sha384", 00:20:36.363 "sha512" 00:20:36.363 ], 00:20:36.363 "dhchap_dhgroups": [ 00:20:36.363 "null", 00:20:36.363 "ffdhe2048", 00:20:36.363 "ffdhe3072", 00:20:36.363 "ffdhe4096", 00:20:36.363 "ffdhe6144", 00:20:36.363 "ffdhe8192" 00:20:36.363 ] 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": 
"bdev_nvme_set_hotplug", 00:20:36.363 "params": { 00:20:36.363 "period_us": 100000, 00:20:36.363 "enable": false 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "bdev_malloc_create", 00:20:36.363 "params": { 00:20:36.363 "name": "malloc0", 00:20:36.363 "num_blocks": 8192, 00:20:36.363 "block_size": 4096, 00:20:36.363 "physical_block_size": 4096, 00:20:36.363 "uuid": "f1b85105-4fca-44f3-9836-ece250111ca8", 00:20:36.363 "optimal_io_boundary": 0 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "bdev_wait_for_examine" 00:20:36.363 } 00:20:36.363 ] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "nbd", 00:20:36.363 "config": [] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "scheduler", 00:20:36.363 "config": [ 00:20:36.363 { 00:20:36.363 "method": "framework_set_scheduler", 00:20:36.363 "params": { 00:20:36.363 "name": "static" 00:20:36.363 } 00:20:36.363 } 00:20:36.363 ] 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "subsystem": "nvmf", 00:20:36.363 "config": [ 00:20:36.363 { 00:20:36.363 "method": "nvmf_set_config", 00:20:36.363 "params": { 00:20:36.363 "discovery_filter": "match_any", 00:20:36.363 "admin_cmd_passthru": { 00:20:36.363 "identify_ctrlr": false 00:20:36.363 } 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "nvmf_set_max_subsystems", 00:20:36.363 "params": { 00:20:36.363 "max_subsystems": 1024 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "nvmf_set_crdt", 00:20:36.363 "params": { 00:20:36.363 "crdt1": 0, 00:20:36.363 "crdt2": 0, 00:20:36.363 "crdt3": 0 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "nvmf_create_transport", 00:20:36.363 "params": { 00:20:36.363 "trtype": "TCP", 00:20:36.363 "max_queue_depth": 128, 00:20:36.363 "max_io_qpairs_per_ctrlr": 127, 00:20:36.363 "in_capsule_data_size": 4096, 00:20:36.363 "max_io_size": 131072, 00:20:36.363 "io_unit_size": 131072, 00:20:36.363 "max_aq_depth": 128, 00:20:36.363 "num_shared_buffers": 511, 00:20:36.363 "buf_cache_size": 4294967295, 00:20:36.363 "dif_insert_or_strip": false, 00:20:36.363 "zcopy": false, 00:20:36.363 "c2h_success": false, 00:20:36.363 "sock_priority": 0, 00:20:36.363 "abort_timeout_sec": 1, 00:20:36.363 "ack_timeout": 0, 00:20:36.363 "data_wr_pool_size": 0 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "nvmf_create_subsystem", 00:20:36.363 "params": { 00:20:36.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.363 "allow_any_host": false, 00:20:36.363 "serial_number": "00000000000000000000", 00:20:36.363 "model_number": "SPDK bdev Controller", 00:20:36.363 "max_namespaces": 32, 00:20:36.363 "min_cntlid": 1, 00:20:36.363 "max_cntlid": 65519, 00:20:36.363 "ana_reporting": false 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "nvmf_subsystem_add_host", 00:20:36.363 "params": { 00:20:36.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.363 "host": "nqn.2016-06.io.spdk:host1", 00:20:36.363 "psk": "key0" 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "nvmf_subsystem_add_ns", 00:20:36.363 "params": { 00:20:36.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.363 "namespace": { 00:20:36.363 "nsid": 1, 00:20:36.363 "bdev_name": "malloc0", 00:20:36.363 "nguid": "F1B851054FCA44F39836ECE250111CA8", 00:20:36.363 "uuid": "f1b85105-4fca-44f3-9836-ece250111ca8", 00:20:36.363 "no_auto_visible": false 00:20:36.363 } 00:20:36.363 } 00:20:36.363 }, 00:20:36.363 { 00:20:36.363 "method": "nvmf_subsystem_add_listener", 00:20:36.363 "params": { 
00:20:36.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.363 "listen_address": { 00:20:36.363 "trtype": "TCP", 00:20:36.363 "adrfam": "IPv4", 00:20:36.363 "traddr": "10.0.0.2", 00:20:36.363 "trsvcid": "4420" 00:20:36.363 }, 00:20:36.363 "secure_channel": true 00:20:36.363 } 00:20:36.364 } 00:20:36.364 ] 00:20:36.364 } 00:20:36.364 ] 00:20:36.364 }' 00:20:36.364 13:03:41 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:36.623 13:03:41 -- target/tls.sh@264 -- # bperfcfg='{ 00:20:36.624 "subsystems": [ 00:20:36.624 { 00:20:36.624 "subsystem": "keyring", 00:20:36.624 "config": [ 00:20:36.624 { 00:20:36.624 "method": "keyring_file_add_key", 00:20:36.624 "params": { 00:20:36.624 "name": "key0", 00:20:36.624 "path": "/tmp/tmp.SKN1k2KAyO" 00:20:36.624 } 00:20:36.624 } 00:20:36.624 ] 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "subsystem": "iobuf", 00:20:36.624 "config": [ 00:20:36.624 { 00:20:36.624 "method": "iobuf_set_options", 00:20:36.624 "params": { 00:20:36.624 "small_pool_count": 8192, 00:20:36.624 "large_pool_count": 1024, 00:20:36.624 "small_bufsize": 8192, 00:20:36.624 "large_bufsize": 135168 00:20:36.624 } 00:20:36.624 } 00:20:36.624 ] 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "subsystem": "sock", 00:20:36.624 "config": [ 00:20:36.624 { 00:20:36.624 "method": "sock_impl_set_options", 00:20:36.624 "params": { 00:20:36.624 "impl_name": "posix", 00:20:36.624 "recv_buf_size": 2097152, 00:20:36.624 "send_buf_size": 2097152, 00:20:36.624 "enable_recv_pipe": true, 00:20:36.624 "enable_quickack": false, 00:20:36.624 "enable_placement_id": 0, 00:20:36.624 "enable_zerocopy_send_server": true, 00:20:36.624 "enable_zerocopy_send_client": false, 00:20:36.624 "zerocopy_threshold": 0, 00:20:36.624 "tls_version": 0, 00:20:36.624 "enable_ktls": false 00:20:36.624 } 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "method": "sock_impl_set_options", 00:20:36.624 "params": { 00:20:36.624 "impl_name": "ssl", 00:20:36.624 "recv_buf_size": 4096, 00:20:36.624 "send_buf_size": 4096, 00:20:36.624 "enable_recv_pipe": true, 00:20:36.624 "enable_quickack": false, 00:20:36.624 "enable_placement_id": 0, 00:20:36.624 "enable_zerocopy_send_server": true, 00:20:36.624 "enable_zerocopy_send_client": false, 00:20:36.624 "zerocopy_threshold": 0, 00:20:36.624 "tls_version": 0, 00:20:36.624 "enable_ktls": false 00:20:36.624 } 00:20:36.624 } 00:20:36.624 ] 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "subsystem": "vmd", 00:20:36.624 "config": [] 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "subsystem": "accel", 00:20:36.624 "config": [ 00:20:36.624 { 00:20:36.624 "method": "accel_set_options", 00:20:36.624 "params": { 00:20:36.624 "small_cache_size": 128, 00:20:36.624 "large_cache_size": 16, 00:20:36.624 "task_count": 2048, 00:20:36.624 "sequence_count": 2048, 00:20:36.624 "buf_count": 2048 00:20:36.624 } 00:20:36.624 } 00:20:36.624 ] 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "subsystem": "bdev", 00:20:36.624 "config": [ 00:20:36.624 { 00:20:36.624 "method": "bdev_set_options", 00:20:36.624 "params": { 00:20:36.624 "bdev_io_pool_size": 65535, 00:20:36.624 "bdev_io_cache_size": 256, 00:20:36.624 "bdev_auto_examine": true, 00:20:36.624 "iobuf_small_cache_size": 128, 00:20:36.624 "iobuf_large_cache_size": 16 00:20:36.624 } 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "method": "bdev_raid_set_options", 00:20:36.624 "params": { 00:20:36.624 "process_window_size_kb": 1024 00:20:36.624 } 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "method": 
"bdev_iscsi_set_options", 00:20:36.624 "params": { 00:20:36.624 "timeout_sec": 30 00:20:36.624 } 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "method": "bdev_nvme_set_options", 00:20:36.624 "params": { 00:20:36.624 "action_on_timeout": "none", 00:20:36.624 "timeout_us": 0, 00:20:36.624 "timeout_admin_us": 0, 00:20:36.624 "keep_alive_timeout_ms": 10000, 00:20:36.624 "arbitration_burst": 0, 00:20:36.624 "low_priority_weight": 0, 00:20:36.624 "medium_priority_weight": 0, 00:20:36.624 "high_priority_weight": 0, 00:20:36.624 "nvme_adminq_poll_period_us": 10000, 00:20:36.624 "nvme_ioq_poll_period_us": 0, 00:20:36.624 "io_queue_requests": 512, 00:20:36.624 "delay_cmd_submit": true, 00:20:36.624 "transport_retry_count": 4, 00:20:36.624 "bdev_retry_count": 3, 00:20:36.624 "transport_ack_timeout": 0, 00:20:36.624 "ctrlr_loss_timeout_sec": 0, 00:20:36.624 "reconnect_delay_sec": 0, 00:20:36.624 "fast_io_fail_timeout_sec": 0, 00:20:36.624 "disable_auto_failback": false, 00:20:36.624 "generate_uuids": false, 00:20:36.624 "transport_tos": 0, 00:20:36.624 "nvme_error_stat": false, 00:20:36.624 "rdma_srq_size": 0, 00:20:36.624 "io_path_stat": false, 00:20:36.624 "allow_accel_sequence": false, 00:20:36.624 "rdma_max_cq_size": 0, 00:20:36.624 "rdma_cm_event_timeout_ms": 0, 00:20:36.624 "dhchap_digests": [ 00:20:36.624 "sha256", 00:20:36.624 "sha384", 00:20:36.624 "sha512" 00:20:36.624 ], 00:20:36.624 "dhchap_dhgroups": [ 00:20:36.624 "null", 00:20:36.624 "ffdhe2048", 00:20:36.624 "ffdhe3072", 00:20:36.624 "ffdhe4096", 00:20:36.624 "ffdhe6144", 00:20:36.624 "ffdhe8192" 00:20:36.624 ] 00:20:36.624 } 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "method": "bdev_nvme_attach_controller", 00:20:36.624 "params": { 00:20:36.624 "name": "nvme0", 00:20:36.624 "trtype": "TCP", 00:20:36.624 "adrfam": "IPv4", 00:20:36.624 "traddr": "10.0.0.2", 00:20:36.624 "trsvcid": "4420", 00:20:36.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.624 "prchk_reftag": false, 00:20:36.624 "prchk_guard": false, 00:20:36.624 "ctrlr_loss_timeout_sec": 0, 00:20:36.624 "reconnect_delay_sec": 0, 00:20:36.624 "fast_io_fail_timeout_sec": 0, 00:20:36.624 "psk": "key0", 00:20:36.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.624 "hdgst": false, 00:20:36.624 "ddgst": false 00:20:36.624 } 00:20:36.624 }, 00:20:36.624 { 00:20:36.624 "method": "bdev_nvme_set_hotplug", 00:20:36.624 "params": { 00:20:36.624 "period_us": 100000, 00:20:36.625 "enable": false 00:20:36.625 } 00:20:36.625 }, 00:20:36.625 { 00:20:36.625 "method": "bdev_enable_histogram", 00:20:36.625 "params": { 00:20:36.625 "name": "nvme0n1", 00:20:36.625 "enable": true 00:20:36.625 } 00:20:36.625 }, 00:20:36.625 { 00:20:36.625 "method": "bdev_wait_for_examine" 00:20:36.625 } 00:20:36.625 ] 00:20:36.625 }, 00:20:36.625 { 00:20:36.625 "subsystem": "nbd", 00:20:36.625 "config": [] 00:20:36.625 } 00:20:36.625 ] 00:20:36.625 }' 00:20:36.625 13:03:41 -- target/tls.sh@266 -- # killprocess 4014625 00:20:36.625 13:03:41 -- common/autotest_common.sh@936 -- # '[' -z 4014625 ']' 00:20:36.625 13:03:41 -- common/autotest_common.sh@940 -- # kill -0 4014625 00:20:36.625 13:03:41 -- common/autotest_common.sh@941 -- # uname 00:20:36.625 13:03:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:36.625 13:03:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4014625 00:20:36.625 13:03:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:36.625 13:03:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:36.625 13:03:41 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 4014625' 00:20:36.625 killing process with pid 4014625 00:20:36.625 13:03:41 -- common/autotest_common.sh@955 -- # kill 4014625 00:20:36.625 Received shutdown signal, test time was about 1.000000 seconds 00:20:36.625 00:20:36.625 Latency(us) 00:20:36.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.625 =================================================================================================================== 00:20:36.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.625 13:03:41 -- common/autotest_common.sh@960 -- # wait 4014625 00:20:36.886 13:03:41 -- target/tls.sh@267 -- # killprocess 4014280 00:20:36.886 13:03:41 -- common/autotest_common.sh@936 -- # '[' -z 4014280 ']' 00:20:36.886 13:03:41 -- common/autotest_common.sh@940 -- # kill -0 4014280 00:20:36.886 13:03:41 -- common/autotest_common.sh@941 -- # uname 00:20:36.886 13:03:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:36.886 13:03:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4014280 00:20:36.886 13:03:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:36.886 13:03:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:36.886 13:03:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4014280' 00:20:36.886 killing process with pid 4014280 00:20:36.886 13:03:41 -- common/autotest_common.sh@955 -- # kill 4014280 00:20:36.886 13:03:41 -- common/autotest_common.sh@960 -- # wait 4014280 00:20:36.886 13:03:41 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:36.886 13:03:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:36.886 13:03:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:36.886 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.886 13:03:41 -- target/tls.sh@269 -- # echo '{ 00:20:36.886 "subsystems": [ 00:20:36.886 { 00:20:36.886 "subsystem": "keyring", 00:20:36.886 "config": [ 00:20:36.886 { 00:20:36.886 "method": "keyring_file_add_key", 00:20:36.886 "params": { 00:20:36.886 "name": "key0", 00:20:36.886 "path": "/tmp/tmp.SKN1k2KAyO" 00:20:36.886 } 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "iobuf", 00:20:36.886 "config": [ 00:20:36.886 { 00:20:36.886 "method": "iobuf_set_options", 00:20:36.886 "params": { 00:20:36.886 "small_pool_count": 8192, 00:20:36.886 "large_pool_count": 1024, 00:20:36.886 "small_bufsize": 8192, 00:20:36.886 "large_bufsize": 135168 00:20:36.886 } 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "sock", 00:20:36.886 "config": [ 00:20:36.886 { 00:20:36.886 "method": "sock_impl_set_options", 00:20:36.886 "params": { 00:20:36.886 "impl_name": "posix", 00:20:36.886 "recv_buf_size": 2097152, 00:20:36.886 "send_buf_size": 2097152, 00:20:36.886 "enable_recv_pipe": true, 00:20:36.886 "enable_quickack": false, 00:20:36.886 "enable_placement_id": 0, 00:20:36.886 "enable_zerocopy_send_server": true, 00:20:36.886 "enable_zerocopy_send_client": false, 00:20:36.886 "zerocopy_threshold": 0, 00:20:36.886 "tls_version": 0, 00:20:36.886 "enable_ktls": false 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "sock_impl_set_options", 00:20:36.886 "params": { 00:20:36.886 "impl_name": "ssl", 00:20:36.886 "recv_buf_size": 4096, 00:20:36.886 "send_buf_size": 4096, 00:20:36.886 "enable_recv_pipe": true, 00:20:36.886 "enable_quickack": false, 00:20:36.886 "enable_placement_id": 
0, 00:20:36.886 "enable_zerocopy_send_server": true, 00:20:36.886 "enable_zerocopy_send_client": false, 00:20:36.886 "zerocopy_threshold": 0, 00:20:36.886 "tls_version": 0, 00:20:36.886 "enable_ktls": false 00:20:36.886 } 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "vmd", 00:20:36.886 "config": [] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "accel", 00:20:36.886 "config": [ 00:20:36.886 { 00:20:36.886 "method": "accel_set_options", 00:20:36.886 "params": { 00:20:36.886 "small_cache_size": 128, 00:20:36.886 "large_cache_size": 16, 00:20:36.886 "task_count": 2048, 00:20:36.886 "sequence_count": 2048, 00:20:36.886 "buf_count": 2048 00:20:36.886 } 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "bdev", 00:20:36.886 "config": [ 00:20:36.886 { 00:20:36.886 "method": "bdev_set_options", 00:20:36.886 "params": { 00:20:36.886 "bdev_io_pool_size": 65535, 00:20:36.886 "bdev_io_cache_size": 256, 00:20:36.886 "bdev_auto_examine": true, 00:20:36.886 "iobuf_small_cache_size": 128, 00:20:36.886 "iobuf_large_cache_size": 16 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "bdev_raid_set_options", 00:20:36.886 "params": { 00:20:36.886 "process_window_size_kb": 1024 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "bdev_iscsi_set_options", 00:20:36.886 "params": { 00:20:36.886 "timeout_sec": 30 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "bdev_nvme_set_options", 00:20:36.886 "params": { 00:20:36.886 "action_on_timeout": "none", 00:20:36.886 "timeout_us": 0, 00:20:36.886 "timeout_admin_us": 0, 00:20:36.886 "keep_alive_timeout_ms": 10000, 00:20:36.886 "arbitration_burst": 0, 00:20:36.886 "low_priority_weight": 0, 00:20:36.886 "medium_priority_weight": 0, 00:20:36.886 "high_priority_weight": 0, 00:20:36.886 "nvme_adminq_poll_period_us": 10000, 00:20:36.886 "nvme_ioq_poll_period_us": 0, 00:20:36.886 "io_queue_requests": 0, 00:20:36.886 "delay_cmd_submit": true, 00:20:36.886 "transport_retry_count": 4, 00:20:36.886 "bdev_retry_count": 3, 00:20:36.886 "transport_ack_timeout": 0, 00:20:36.886 "ctrlr_loss_timeout_sec": 0, 00:20:36.886 "reconnect_delay_sec": 0, 00:20:36.886 "fast_io_fail_timeout_sec": 0, 00:20:36.886 "disable_auto_failback": false, 00:20:36.886 "generate_uuids": false, 00:20:36.886 "transport_tos": 0, 00:20:36.886 "nvme_error_stat": false, 00:20:36.886 "rdma_srq_size": 0, 00:20:36.886 "io_path_stat": false, 00:20:36.886 "allow_accel_sequence": false, 00:20:36.886 "rdma_max_cq_size": 0, 00:20:36.886 "rdma_cm_event_timeout_ms": 0, 00:20:36.886 "dhchap_digests": [ 00:20:36.886 "sha256", 00:20:36.886 "sha384", 00:20:36.886 "sha512" 00:20:36.886 ], 00:20:36.886 "dhchap_dhgroups": [ 00:20:36.886 "null", 00:20:36.886 "ffdhe2048", 00:20:36.886 "ffdhe3072", 00:20:36.886 "ffdhe4096", 00:20:36.886 "ffdhe6144", 00:20:36.886 "ffdhe8192" 00:20:36.886 ] 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "bdev_nvme_set_hotplug", 00:20:36.886 "params": { 00:20:36.886 "period_us": 100000, 00:20:36.886 "enable": false 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "bdev_malloc_create", 00:20:36.886 "params": { 00:20:36.886 "name": "malloc0", 00:20:36.886 "num_blocks": 8192, 00:20:36.886 "block_size": 4096, 00:20:36.886 "physical_block_size": 4096, 00:20:36.886 "uuid": "f1b85105-4fca-44f3-9836-ece250111ca8", 00:20:36.886 "optimal_io_boundary": 0 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": 
"bdev_wait_for_examine" 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "nbd", 00:20:36.886 "config": [] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "scheduler", 00:20:36.886 "config": [ 00:20:36.886 { 00:20:36.886 "method": "framework_set_scheduler", 00:20:36.886 "params": { 00:20:36.886 "name": "static" 00:20:36.886 } 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "subsystem": "nvmf", 00:20:36.886 "config": [ 00:20:36.886 { 00:20:36.886 "method": "nvmf_set_config", 00:20:36.886 "params": { 00:20:36.886 "discovery_filter": "match_any", 00:20:36.886 "admin_cmd_passthru": { 00:20:36.886 "identify_ctrlr": false 00:20:36.886 } 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "nvmf_set_max_subsystems", 00:20:36.886 "params": { 00:20:36.886 "max_subsystems": 1024 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "nvmf_set_crdt", 00:20:36.886 "params": { 00:20:36.886 "crdt1": 0, 00:20:36.886 "crdt2": 0, 00:20:36.886 "crdt3": 0 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "nvmf_create_transport", 00:20:36.886 "params": { 00:20:36.886 "trtype": "TCP", 00:20:36.886 "max_queue_depth": 128, 00:20:36.886 "max_io_qpairs_per_ctrlr": 127, 00:20:36.886 "in_capsule_data_size": 4096, 00:20:36.886 "max_io_size": 131072, 00:20:36.886 "io_unit_size": 131072, 00:20:36.886 "max_aq_depth": 128, 00:20:36.886 "num_shared_buffers": 511, 00:20:36.886 "buf_cache_size": 4294967295, 00:20:36.886 "dif_insert_or_strip": false, 00:20:36.886 "zcopy": false, 00:20:36.886 "c2h_success": false, 00:20:36.886 "sock_priority": 0, 00:20:36.886 "abort_timeout_sec": 1, 00:20:36.886 "ack_timeout": 0, 00:20:36.886 "data_wr_pool_size": 0 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "nvmf_create_subsystem", 00:20:36.886 "params": { 00:20:36.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.886 "allow_any_host": false, 00:20:36.886 "serial_number": "00000000000000000000", 00:20:36.886 "model_number": "SPDK bdev Controller", 00:20:36.886 "max_namespaces": 32, 00:20:36.886 "min_cntlid": 1, 00:20:36.886 "max_cntlid": 65519, 00:20:36.886 "ana_reporting": false 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "nvmf_subsystem_add_host", 00:20:36.886 "params": { 00:20:36.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.886 "host": "nqn.2016-06.io.spdk:host1", 00:20:36.886 "psk": "key0" 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "nvmf_subsystem_add_ns", 00:20:36.886 "params": { 00:20:36.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.886 "namespace": { 00:20:36.886 "nsid": 1, 00:20:36.886 "bdev_name": "malloc0", 00:20:36.886 "nguid": "F1B851054FCA44F39836ECE250111CA8", 00:20:36.886 "uuid": "f1b85105-4fca-44f3-9836-ece250111ca8", 00:20:36.886 "no_auto_visible": false 00:20:36.886 } 00:20:36.886 } 00:20:36.886 }, 00:20:36.886 { 00:20:36.886 "method": "nvmf_subsystem_add_listener", 00:20:36.886 "params": { 00:20:36.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.886 "listen_address": { 00:20:36.886 "trtype": "TCP", 00:20:36.886 "adrfam": "IPv4", 00:20:36.886 "traddr": "10.0.0.2", 00:20:36.886 "trsvcid": "4420" 00:20:36.886 }, 00:20:36.886 "secure_channel": true 00:20:36.886 } 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 } 00:20:36.886 ] 00:20:36.886 }' 00:20:36.886 13:03:41 -- nvmf/common.sh@470 -- # nvmfpid=4015153 00:20:36.886 13:03:41 -- nvmf/common.sh@471 -- # waitforlisten 4015153 00:20:36.886 13:03:41 -- 
common/autotest_common.sh@817 -- # '[' -z 4015153 ']' 00:20:36.886 13:03:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.886 13:03:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.886 13:03:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.886 13:03:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.886 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.886 13:03:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:36.886 [2024-04-26 13:03:41.934235] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:36.887 [2024-04-26 13:03:41.934291] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.147 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.147 [2024-04-26 13:03:41.999181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.147 [2024-04-26 13:03:42.061821] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.147 [2024-04-26 13:03:42.061867] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.147 [2024-04-26 13:03:42.061875] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.147 [2024-04-26 13:03:42.061882] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.147 [2024-04-26 13:03:42.061888] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.147 [2024-04-26 13:03:42.061942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.407 [2024-04-26 13:03:42.250964] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.407 [2024-04-26 13:03:42.282976] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.407 [2024-04-26 13:03:42.291148] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.667 13:03:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.667 13:03:42 -- common/autotest_common.sh@850 -- # return 0 00:20:37.667 13:03:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:37.667 13:03:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:37.667 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:20:37.927 13:03:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.927 13:03:42 -- target/tls.sh@272 -- # bdevperf_pid=4015340 00:20:37.927 13:03:42 -- target/tls.sh@273 -- # waitforlisten 4015340 /var/tmp/bdevperf.sock 00:20:37.927 13:03:42 -- common/autotest_common.sh@817 -- # '[' -z 4015340 ']' 00:20:37.927 13:03:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.927 13:03:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.928 13:03:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:37.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.928 13:03:42 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:37.928 13:03:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.928 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:20:37.928 13:03:42 -- target/tls.sh@270 -- # echo '{ 00:20:37.928 "subsystems": [ 00:20:37.928 { 00:20:37.928 "subsystem": "keyring", 00:20:37.928 "config": [ 00:20:37.928 { 00:20:37.928 "method": "keyring_file_add_key", 00:20:37.928 "params": { 00:20:37.928 "name": "key0", 00:20:37.928 "path": "/tmp/tmp.SKN1k2KAyO" 00:20:37.928 } 00:20:37.928 } 00:20:37.928 ] 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "subsystem": "iobuf", 00:20:37.928 "config": [ 00:20:37.928 { 00:20:37.928 "method": "iobuf_set_options", 00:20:37.928 "params": { 00:20:37.928 "small_pool_count": 8192, 00:20:37.928 "large_pool_count": 1024, 00:20:37.928 "small_bufsize": 8192, 00:20:37.928 "large_bufsize": 135168 00:20:37.928 } 00:20:37.928 } 00:20:37.928 ] 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "subsystem": "sock", 00:20:37.928 "config": [ 00:20:37.928 { 00:20:37.928 "method": "sock_impl_set_options", 00:20:37.928 "params": { 00:20:37.928 "impl_name": "posix", 00:20:37.928 "recv_buf_size": 2097152, 00:20:37.928 "send_buf_size": 2097152, 00:20:37.928 "enable_recv_pipe": true, 00:20:37.928 "enable_quickack": false, 00:20:37.928 "enable_placement_id": 0, 00:20:37.928 "enable_zerocopy_send_server": true, 00:20:37.928 "enable_zerocopy_send_client": false, 00:20:37.928 "zerocopy_threshold": 0, 00:20:37.928 "tls_version": 0, 00:20:37.928 "enable_ktls": false 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "sock_impl_set_options", 00:20:37.928 "params": { 00:20:37.928 "impl_name": "ssl", 00:20:37.928 "recv_buf_size": 4096, 00:20:37.928 "send_buf_size": 4096, 00:20:37.928 "enable_recv_pipe": true, 00:20:37.928 "enable_quickack": false, 00:20:37.928 "enable_placement_id": 0, 00:20:37.928 "enable_zerocopy_send_server": true, 00:20:37.928 "enable_zerocopy_send_client": false, 00:20:37.928 "zerocopy_threshold": 0, 00:20:37.928 "tls_version": 0, 00:20:37.928 "enable_ktls": false 00:20:37.928 } 00:20:37.928 } 00:20:37.928 ] 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "subsystem": "vmd", 00:20:37.928 "config": [] 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "subsystem": "accel", 00:20:37.928 "config": [ 00:20:37.928 { 00:20:37.928 "method": "accel_set_options", 00:20:37.928 "params": { 00:20:37.928 "small_cache_size": 128, 00:20:37.928 "large_cache_size": 16, 00:20:37.928 "task_count": 2048, 00:20:37.928 "sequence_count": 2048, 00:20:37.928 "buf_count": 2048 00:20:37.928 } 00:20:37.928 } 00:20:37.928 ] 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "subsystem": "bdev", 00:20:37.928 "config": [ 00:20:37.928 { 00:20:37.928 "method": "bdev_set_options", 00:20:37.928 "params": { 00:20:37.928 "bdev_io_pool_size": 65535, 00:20:37.928 "bdev_io_cache_size": 256, 00:20:37.928 "bdev_auto_examine": true, 00:20:37.928 "iobuf_small_cache_size": 128, 00:20:37.928 "iobuf_large_cache_size": 16 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "bdev_raid_set_options", 00:20:37.928 "params": { 00:20:37.928 "process_window_size_kb": 1024 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "bdev_iscsi_set_options", 00:20:37.928 "params": { 00:20:37.928 
"timeout_sec": 30 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "bdev_nvme_set_options", 00:20:37.928 "params": { 00:20:37.928 "action_on_timeout": "none", 00:20:37.928 "timeout_us": 0, 00:20:37.928 "timeout_admin_us": 0, 00:20:37.928 "keep_alive_timeout_ms": 10000, 00:20:37.928 "arbitration_burst": 0, 00:20:37.928 "low_priority_weight": 0, 00:20:37.928 "medium_priority_weight": 0, 00:20:37.928 "high_priority_weight": 0, 00:20:37.928 "nvme_adminq_poll_period_us": 10000, 00:20:37.928 "nvme_ioq_poll_period_us": 0, 00:20:37.928 "io_queue_requests": 512, 00:20:37.928 "delay_cmd_submit": true, 00:20:37.928 "transport_retry_count": 4, 00:20:37.928 "bdev_retry_count": 3, 00:20:37.928 "transport_ack_timeout": 0, 00:20:37.928 "ctrlr_loss_timeout_sec": 0, 00:20:37.928 "reconnect_delay_sec": 0, 00:20:37.928 "fast_io_fail_timeout_sec": 0, 00:20:37.928 "disable_auto_failback": false, 00:20:37.928 "generate_uuids": false, 00:20:37.928 "transport_tos": 0, 00:20:37.928 "nvme_error_stat": false, 00:20:37.928 "rdma_srq_size": 0, 00:20:37.928 "io_path_stat": false, 00:20:37.928 "allow_accel_sequence": false, 00:20:37.928 "rdma_max_cq_size": 0, 00:20:37.928 "rdma_cm_event_timeout_ms": 0, 00:20:37.928 "dhchap_digests": [ 00:20:37.928 "sha256", 00:20:37.928 "sha384", 00:20:37.928 "sha512" 00:20:37.928 ], 00:20:37.928 "dhchap_dhgroups": [ 00:20:37.928 "null", 00:20:37.928 "ffdhe2048", 00:20:37.928 "ffdhe3072", 00:20:37.928 "ffdhe4096", 00:20:37.928 "ffdhe6144", 00:20:37.928 "ffdhe8192" 00:20:37.928 ] 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "bdev_nvme_attach_controller", 00:20:37.928 "params": { 00:20:37.928 "name": "nvme0", 00:20:37.928 "trtype": "TCP", 00:20:37.928 "adrfam": "IPv4", 00:20:37.928 "traddr": "10.0.0.2", 00:20:37.928 "trsvcid": "4420", 00:20:37.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.928 "prchk_reftag": false, 00:20:37.928 "prchk_guard": false, 00:20:37.928 "ctrlr_loss_timeout_sec": 0, 00:20:37.928 "reconnect_delay_sec": 0, 00:20:37.928 "fast_io_fail_timeout_sec": 0, 00:20:37.928 "psk": "key0", 00:20:37.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.928 "hdgst": false, 00:20:37.928 "ddgst": false 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "bdev_nvme_set_hotplug", 00:20:37.928 "params": { 00:20:37.928 "period_us": 100000, 00:20:37.928 "enable": false 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "bdev_enable_histogram", 00:20:37.928 "params": { 00:20:37.928 "name": "nvme0n1", 00:20:37.928 "enable": true 00:20:37.928 } 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "method": "bdev_wait_for_examine" 00:20:37.928 } 00:20:37.928 ] 00:20:37.928 }, 00:20:37.928 { 00:20:37.928 "subsystem": "nbd", 00:20:37.928 "config": [] 00:20:37.928 } 00:20:37.928 ] 00:20:37.928 }' 00:20:37.928 [2024-04-26 13:03:42.776800] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:20:37.928 [2024-04-26 13:03:42.776855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015340 ] 00:20:37.928 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.928 [2024-04-26 13:03:42.851539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.928 [2024-04-26 13:03:42.904646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.191 [2024-04-26 13:03:43.030384] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.768 13:03:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:38.768 13:03:43 -- common/autotest_common.sh@850 -- # return 0 00:20:38.768 13:03:43 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.768 13:03:43 -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:38.768 13:03:43 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.768 13:03:43 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.768 Running I/O for 1 seconds... 00:20:40.153 00:20:40.153 Latency(us) 00:20:40.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.153 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:40.153 Verification LBA range: start 0x0 length 0x2000 00:20:40.153 nvme0n1 : 1.02 4723.16 18.45 0.00 0.00 26897.61 4505.60 38884.69 00:20:40.153 =================================================================================================================== 00:20:40.153 Total : 4723.16 18.45 0.00 0.00 26897.61 4505.60 38884.69 00:20:40.153 0 00:20:40.153 13:03:44 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:40.153 13:03:44 -- target/tls.sh@279 -- # cleanup 00:20:40.153 13:03:44 -- target/tls.sh@15 -- # process_shm --id 0 00:20:40.153 13:03:44 -- common/autotest_common.sh@794 -- # type=--id 00:20:40.153 13:03:44 -- common/autotest_common.sh@795 -- # id=0 00:20:40.153 13:03:44 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:40.154 13:03:44 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:40.154 13:03:44 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:40.154 13:03:44 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:40.154 13:03:44 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:40.154 13:03:44 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:40.154 nvmf_trace.0 00:20:40.154 13:03:44 -- common/autotest_common.sh@809 -- # return 0 00:20:40.154 13:03:44 -- target/tls.sh@16 -- # killprocess 4015340 00:20:40.154 13:03:44 -- common/autotest_common.sh@936 -- # '[' -z 4015340 ']' 00:20:40.154 13:03:44 -- common/autotest_common.sh@940 -- # kill -0 4015340 00:20:40.154 13:03:44 -- common/autotest_common.sh@941 -- # uname 00:20:40.154 13:03:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.154 13:03:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4015340 00:20:40.154 13:03:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:40.154 13:03:44 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:20:40.154 13:03:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4015340' 00:20:40.154 killing process with pid 4015340 00:20:40.154 13:03:44 -- common/autotest_common.sh@955 -- # kill 4015340 00:20:40.154 Received shutdown signal, test time was about 1.000000 seconds 00:20:40.154 00:20:40.154 Latency(us) 00:20:40.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.154 =================================================================================================================== 00:20:40.154 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.154 13:03:44 -- common/autotest_common.sh@960 -- # wait 4015340 00:20:40.154 13:03:45 -- target/tls.sh@17 -- # nvmftestfini 00:20:40.154 13:03:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:40.154 13:03:45 -- nvmf/common.sh@117 -- # sync 00:20:40.154 13:03:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.154 13:03:45 -- nvmf/common.sh@120 -- # set +e 00:20:40.154 13:03:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.154 13:03:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.154 rmmod nvme_tcp 00:20:40.154 rmmod nvme_fabrics 00:20:40.154 rmmod nvme_keyring 00:20:40.154 13:03:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.154 13:03:45 -- nvmf/common.sh@124 -- # set -e 00:20:40.154 13:03:45 -- nvmf/common.sh@125 -- # return 0 00:20:40.154 13:03:45 -- nvmf/common.sh@478 -- # '[' -n 4015153 ']' 00:20:40.154 13:03:45 -- nvmf/common.sh@479 -- # killprocess 4015153 00:20:40.154 13:03:45 -- common/autotest_common.sh@936 -- # '[' -z 4015153 ']' 00:20:40.154 13:03:45 -- common/autotest_common.sh@940 -- # kill -0 4015153 00:20:40.154 13:03:45 -- common/autotest_common.sh@941 -- # uname 00:20:40.154 13:03:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.154 13:03:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4015153 00:20:40.154 13:03:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:40.154 13:03:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:40.154 13:03:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4015153' 00:20:40.154 killing process with pid 4015153 00:20:40.154 13:03:45 -- common/autotest_common.sh@955 -- # kill 4015153 00:20:40.154 13:03:45 -- common/autotest_common.sh@960 -- # wait 4015153 00:20:40.415 13:03:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:40.415 13:03:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:40.415 13:03:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:40.415 13:03:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.415 13:03:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.415 13:03:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.415 13:03:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.415 13:03:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.958 13:03:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.958 13:03:47 -- target/tls.sh@18 -- # rm -f /tmp/tmp.OrdMxzELuR /tmp/tmp.WvMYWm7Vfp /tmp/tmp.SKN1k2KAyO 00:20:42.958 00:20:42.958 real 1m23.336s 00:20:42.958 user 2m9.842s 00:20:42.958 sys 0m25.494s 00:20:42.958 13:03:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:42.958 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.958 ************************************ 00:20:42.958 END TEST nvmf_tls 00:20:42.958 
************************************ 00:20:42.958 13:03:47 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:42.958 13:03:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.958 13:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.958 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.958 ************************************ 00:20:42.958 START TEST nvmf_fips 00:20:42.958 ************************************ 00:20:42.958 13:03:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:42.958 * Looking for test storage... 00:20:42.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:42.958 13:03:47 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.958 13:03:47 -- nvmf/common.sh@7 -- # uname -s 00:20:42.958 13:03:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.958 13:03:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.958 13:03:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.958 13:03:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.958 13:03:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.958 13:03:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.958 13:03:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.958 13:03:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.958 13:03:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.958 13:03:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.958 13:03:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.958 13:03:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.958 13:03:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.958 13:03:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.958 13:03:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.958 13:03:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.958 13:03:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.958 13:03:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.958 13:03:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.958 13:03:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.958 13:03:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.958 13:03:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.958 13:03:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.959 13:03:47 -- paths/export.sh@5 -- # export PATH 00:20:42.959 13:03:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.959 13:03:47 -- nvmf/common.sh@47 -- # : 0 00:20:42.959 13:03:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.959 13:03:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.959 13:03:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.959 13:03:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.959 13:03:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.959 13:03:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.959 13:03:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.959 13:03:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.959 13:03:47 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.959 13:03:47 -- fips/fips.sh@89 -- # check_openssl_version 00:20:42.959 13:03:47 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:42.959 13:03:47 -- fips/fips.sh@85 -- # openssl version 00:20:42.959 13:03:47 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:42.959 13:03:47 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:42.959 13:03:47 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:42.959 13:03:47 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:42.959 13:03:47 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:42.959 13:03:47 -- scripts/common.sh@333 -- # IFS=.-: 00:20:42.959 13:03:47 -- scripts/common.sh@333 -- # read -ra ver1 00:20:42.959 13:03:47 -- scripts/common.sh@334 -- # IFS=.-: 00:20:42.959 13:03:47 -- scripts/common.sh@334 -- # read -ra ver2 00:20:42.959 13:03:47 -- scripts/common.sh@335 -- # local 'op=>=' 00:20:42.959 13:03:47 -- scripts/common.sh@337 -- # ver1_l=3 00:20:42.959 13:03:47 -- scripts/common.sh@338 -- # ver2_l=3 00:20:42.959 13:03:47 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:20:42.959 13:03:47 -- scripts/common.sh@341 -- # case "$op" in 00:20:42.959 13:03:47 -- scripts/common.sh@345 -- # : 1 00:20:42.959 13:03:47 -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:42.959 13:03:47 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.959 13:03:47 -- scripts/common.sh@362 -- # decimal 3 00:20:42.959 13:03:47 -- scripts/common.sh@350 -- # local d=3 00:20:42.959 13:03:47 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.959 13:03:47 -- scripts/common.sh@352 -- # echo 3 00:20:42.959 13:03:47 -- scripts/common.sh@362 -- # ver1[v]=3 00:20:42.959 13:03:47 -- scripts/common.sh@363 -- # decimal 3 00:20:42.959 13:03:47 -- scripts/common.sh@350 -- # local d=3 00:20:42.959 13:03:47 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.959 13:03:47 -- scripts/common.sh@352 -- # echo 3 00:20:42.959 13:03:47 -- scripts/common.sh@363 -- # ver2[v]=3 00:20:42.959 13:03:47 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:42.959 13:03:47 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:42.959 13:03:47 -- scripts/common.sh@361 -- # (( v++ )) 00:20:42.959 13:03:47 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.959 13:03:47 -- scripts/common.sh@362 -- # decimal 0 00:20:42.959 13:03:47 -- scripts/common.sh@350 -- # local d=0 00:20:42.959 13:03:47 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.959 13:03:47 -- scripts/common.sh@352 -- # echo 0 00:20:42.959 13:03:47 -- scripts/common.sh@362 -- # ver1[v]=0 00:20:42.959 13:03:47 -- scripts/common.sh@363 -- # decimal 0 00:20:42.959 13:03:47 -- scripts/common.sh@350 -- # local d=0 00:20:42.959 13:03:47 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.959 13:03:47 -- scripts/common.sh@352 -- # echo 0 00:20:42.959 13:03:47 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:42.959 13:03:47 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:42.959 13:03:47 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:42.959 13:03:47 -- scripts/common.sh@361 -- # (( v++ )) 00:20:42.959 13:03:47 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.959 13:03:47 -- scripts/common.sh@362 -- # decimal 9 00:20:42.959 13:03:47 -- scripts/common.sh@350 -- # local d=9 00:20:42.959 13:03:47 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:42.959 13:03:47 -- scripts/common.sh@352 -- # echo 9 00:20:42.959 13:03:47 -- scripts/common.sh@362 -- # ver1[v]=9 00:20:42.959 13:03:47 -- scripts/common.sh@363 -- # decimal 0 00:20:42.959 13:03:47 -- scripts/common.sh@350 -- # local d=0 00:20:42.959 13:03:47 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.959 13:03:47 -- scripts/common.sh@352 -- # echo 0 00:20:42.959 13:03:47 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:42.959 13:03:47 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:42.959 13:03:47 -- scripts/common.sh@364 -- # return 0 00:20:42.959 13:03:47 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:42.959 13:03:47 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:42.959 13:03:47 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:42.959 13:03:47 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:42.959 13:03:47 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:42.959 13:03:47 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:42.959 13:03:47 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:42.959 13:03:47 -- fips/fips.sh@113 -- # build_openssl_config 00:20:42.959 13:03:47 -- fips/fips.sh@37 -- # cat 00:20:42.959 13:03:47 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:42.959 13:03:47 -- fips/fips.sh@58 -- # cat - 00:20:42.959 13:03:47 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:42.959 13:03:47 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:42.959 13:03:47 -- fips/fips.sh@116 -- # mapfile -t providers 00:20:42.959 13:03:47 -- fips/fips.sh@116 -- # openssl list -providers 00:20:42.959 13:03:47 -- fips/fips.sh@116 -- # grep name 00:20:42.959 13:03:47 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:42.959 13:03:47 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:42.959 13:03:47 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:42.959 13:03:47 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:42.959 13:03:47 -- common/autotest_common.sh@638 -- # local es=0 00:20:42.959 13:03:47 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:42.959 13:03:47 -- fips/fips.sh@127 -- # : 00:20:42.959 13:03:47 -- common/autotest_common.sh@626 -- # local arg=openssl 00:20:42.959 13:03:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:42.959 13:03:47 -- common/autotest_common.sh@630 -- # type -t openssl 00:20:42.959 13:03:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:42.959 13:03:47 -- common/autotest_common.sh@632 -- # type -P openssl 00:20:42.959 13:03:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:42.959 13:03:47 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:20:42.959 13:03:47 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:20:42.959 13:03:47 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:20:42.959 Error setting digest 00:20:42.959 00D27493547F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:42.959 00D27493547F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:42.959 13:03:47 -- common/autotest_common.sh@641 -- # es=1 00:20:42.959 13:03:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:42.959 13:03:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:42.959 13:03:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:42.959 13:03:47 -- fips/fips.sh@130 -- # nvmftestinit 00:20:42.959 13:03:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:42.959 13:03:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.959 13:03:47 -- nvmf/common.sh@437 -- # prepare_net_devs 
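The stretch from the openssl version probe down to the failed md5 above is the FIPS gate in fips.sh: it requires OpenSSL 3.0.0 or newer, checks that /usr/lib64/ossl-modules/fips.so exists, builds an spdk_fips.conf and exports it via OPENSSL_CONF, expects both a base and a fips provider from openssl list -providers, and finally treats the "Error setting digest" from openssl md5 as proof that non-approved algorithms really are blocked. A condensed sketch of the same gate, assuming a RHEL-style OpenSSL 3 build like the one used here and substituting a sort -V comparison for the cmp_versions walk shown above:

# 1. version gate: need OpenSSL >= 3.0.0
cur=$(openssl version | awk '{print $2}')
[ "$(printf '%s\n' 3.0.0 "$cur" | sort -V | head -n1)" = "3.0.0" ] || { echo "openssl $cur too old" >&2; exit 1; }

# 2. provider gate: with the generated config in effect, a fips provider must be listed
export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep name

# 3. behavioural gate: MD5 has to be rejected while the fips provider is active
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 still usable, FIPS mode not active" >&2
    exit 1
fi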
00:20:42.959 13:03:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:42.959 13:03:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:42.959 13:03:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.959 13:03:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.959 13:03:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.959 13:03:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:42.959 13:03:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:42.959 13:03:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.959 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.094 13:03:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:51.094 13:03:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.094 13:03:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.094 13:03:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.094 13:03:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.094 13:03:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.094 13:03:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.094 13:03:54 -- nvmf/common.sh@295 -- # net_devs=() 00:20:51.094 13:03:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.094 13:03:54 -- nvmf/common.sh@296 -- # e810=() 00:20:51.094 13:03:54 -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.094 13:03:54 -- nvmf/common.sh@297 -- # x722=() 00:20:51.094 13:03:54 -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.094 13:03:54 -- nvmf/common.sh@298 -- # mlx=() 00:20:51.094 13:03:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.094 13:03:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.094 13:03:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.094 13:03:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:51.094 13:03:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.094 13:03:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.094 13:03:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:51.094 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:51.094 13:03:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.094 13:03:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:51.094 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:51.094 13:03:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.094 13:03:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.094 13:03:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.094 13:03:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:51.094 13:03:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.094 13:03:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:51.094 Found net devices under 0000:31:00.0: cvl_0_0 00:20:51.094 13:03:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.094 13:03:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.094 13:03:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.094 13:03:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:51.094 13:03:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.094 13:03:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:51.094 Found net devices under 0000:31:00.1: cvl_0_1 00:20:51.094 13:03:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.094 13:03:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:51.094 13:03:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:51.094 13:03:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:51.094 13:03:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:51.094 13:03:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.094 13:03:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.094 13:03:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.094 13:03:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:51.094 13:03:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.094 13:03:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.094 13:03:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:51.094 13:03:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.094 13:03:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.094 13:03:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:51.094 13:03:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:51.094 13:03:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.094 13:03:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.094 13:03:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.094 13:03:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:20:51.094 13:03:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:51.094 13:03:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.094 13:03:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.094 13:03:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.094 13:03:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:51.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:20:51.094 00:20:51.094 --- 10.0.0.2 ping statistics --- 00:20:51.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.094 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:20:51.094 13:03:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:51.094 00:20:51.094 --- 10.0.0.1 ping statistics --- 00:20:51.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.094 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:51.094 13:03:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.094 13:03:55 -- nvmf/common.sh@411 -- # return 0 00:20:51.094 13:03:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:51.094 13:03:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.094 13:03:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:51.094 13:03:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:51.094 13:03:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.094 13:03:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:51.094 13:03:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:51.094 13:03:55 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:51.094 13:03:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:51.094 13:03:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:51.094 13:03:55 -- common/autotest_common.sh@10 -- # set +x 00:20:51.094 13:03:55 -- nvmf/common.sh@470 -- # nvmfpid=4020111 00:20:51.094 13:03:55 -- nvmf/common.sh@471 -- # waitforlisten 4020111 00:20:51.094 13:03:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.094 13:03:55 -- common/autotest_common.sh@817 -- # '[' -z 4020111 ']' 00:20:51.094 13:03:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.094 13:03:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:51.094 13:03:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.094 13:03:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:51.094 13:03:55 -- common/autotest_common.sh@10 -- # set +x 00:20:51.094 [2024-04-26 13:03:55.169905] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
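The nvmf_tcp_init block above is where common.sh builds the point-to-point test network for NET_TYPE=phy: the first E810 port (cvl_0_0) is moved into a fresh namespace and becomes the target side at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, the NVMe/TCP port 4420 is opened in iptables, and one ping in each direction proves the path before any NVMe traffic flows. A minimal sketch of the same plumbing, assuming the two ports have already been renamed to cvl_0_0/cvl_0_1 by the harness and are cabled back to back:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> initiator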
00:20:51.094 [2024-04-26 13:03:55.169954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.094 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.094 [2024-04-26 13:03:55.251577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.094 [2024-04-26 13:03:55.304523] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.094 [2024-04-26 13:03:55.304555] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.094 [2024-04-26 13:03:55.304560] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.094 [2024-04-26 13:03:55.304565] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.095 [2024-04-26 13:03:55.304569] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.095 [2024-04-26 13:03:55.304584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.095 13:03:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:51.095 13:03:55 -- common/autotest_common.sh@850 -- # return 0 00:20:51.095 13:03:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:51.095 13:03:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:51.095 13:03:55 -- common/autotest_common.sh@10 -- # set +x 00:20:51.095 13:03:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.095 13:03:55 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:51.095 13:03:55 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.095 13:03:55 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.095 13:03:55 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.095 13:03:55 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.095 13:03:55 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.095 13:03:55 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.095 13:03:55 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.095 [2024-04-26 13:03:56.090377] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.095 [2024-04-26 13:03:56.106388] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.095 [2024-04-26 13:03:56.106563] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.095 [2024-04-26 13:03:56.132349] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:51.095 malloc0 00:20:51.355 13:03:56 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.355 13:03:56 -- fips/fips.sh@147 -- # bdevperf_pid=4020390 00:20:51.355 13:03:56 -- fips/fips.sh@148 -- # waitforlisten 4020390 /var/tmp/bdevperf.sock 00:20:51.355 13:03:56 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.355 13:03:56 -- common/autotest_common.sh@817 -- # '[' -z 4020390 ']' 00:20:51.355 13:03:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.355 13:03:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:51.355 13:03:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.355 13:03:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:51.355 13:03:56 -- common/autotest_common.sh@10 -- # set +x 00:20:51.355 [2024-04-26 13:03:56.223468] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:51.355 [2024-04-26 13:03:56.223521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020390 ] 00:20:51.355 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.355 [2024-04-26 13:03:56.273572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.355 [2024-04-26 13:03:56.326194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.924 13:03:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:51.924 13:03:56 -- common/autotest_common.sh@850 -- # return 0 00:20:51.924 13:03:56 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:52.183 [2024-04-26 13:03:57.103212] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.183 [2024-04-26 13:03:57.103272] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:52.183 TLSTESTn1 00:20:52.183 13:03:57 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.442 Running I/O for 10 seconds... 
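Unlike the tls.sh run, the FIPS test handles the PSK as a plain file: fips.sh writes the retained-PSK interchange string shown above to key.txt, tightens it to mode 0600, passes the path to the target through setup_nvmf_tgt_conf, and then attaches the TLSTEST controller from bdevperf with the same path via --psk, which is why the PSK-path and ctrlr-psk deprecation warnings appear on both ends. A minimal sketch of the initiator half, reusing the exact flags from the attach above (paths shortened; the key value is this test's throwaway secret, not one to reuse):

KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$KEY" > key.txt && chmod 0600 key.txt     # both ends must see the identical secret
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ten-second verify run launched here is what produces the TLSTESTn1 latency numbers that follow.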
00:21:02.455 00:21:02.455 Latency(us) 00:21:02.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.455 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.455 Verification LBA range: start 0x0 length 0x2000 00:21:02.455 TLSTESTn1 : 10.02 4638.68 18.12 0.00 0.00 27548.15 4478.29 79517.01 00:21:02.455 =================================================================================================================== 00:21:02.455 Total : 4638.68 18.12 0.00 0.00 27548.15 4478.29 79517.01 00:21:02.455 0 00:21:02.455 13:04:07 -- fips/fips.sh@1 -- # cleanup 00:21:02.455 13:04:07 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:02.455 13:04:07 -- common/autotest_common.sh@794 -- # type=--id 00:21:02.455 13:04:07 -- common/autotest_common.sh@795 -- # id=0 00:21:02.455 13:04:07 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:02.455 13:04:07 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:02.455 13:04:07 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:02.455 13:04:07 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:02.455 13:04:07 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:02.455 13:04:07 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:02.455 nvmf_trace.0 00:21:02.455 13:04:07 -- common/autotest_common.sh@809 -- # return 0 00:21:02.455 13:04:07 -- fips/fips.sh@16 -- # killprocess 4020390 00:21:02.455 13:04:07 -- common/autotest_common.sh@936 -- # '[' -z 4020390 ']' 00:21:02.455 13:04:07 -- common/autotest_common.sh@940 -- # kill -0 4020390 00:21:02.455 13:04:07 -- common/autotest_common.sh@941 -- # uname 00:21:02.455 13:04:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.455 13:04:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4020390 00:21:02.455 13:04:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:02.455 13:04:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:02.455 13:04:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4020390' 00:21:02.455 killing process with pid 4020390 00:21:02.455 13:04:07 -- common/autotest_common.sh@955 -- # kill 4020390 00:21:02.455 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.455 00:21:02.455 Latency(us) 00:21:02.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.455 =================================================================================================================== 00:21:02.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.455 [2024-04-26 13:04:07.484370] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:02.455 13:04:07 -- common/autotest_common.sh@960 -- # wait 4020390 00:21:02.716 13:04:07 -- fips/fips.sh@17 -- # nvmftestfini 00:21:02.716 13:04:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:02.716 13:04:07 -- nvmf/common.sh@117 -- # sync 00:21:02.716 13:04:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.716 13:04:07 -- nvmf/common.sh@120 -- # set +e 00:21:02.716 13:04:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.716 13:04:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.716 rmmod nvme_tcp 00:21:02.716 rmmod nvme_fabrics 00:21:02.716 rmmod nvme_keyring 
00:21:02.716 13:04:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.716 13:04:07 -- nvmf/common.sh@124 -- # set -e 00:21:02.716 13:04:07 -- nvmf/common.sh@125 -- # return 0 00:21:02.716 13:04:07 -- nvmf/common.sh@478 -- # '[' -n 4020111 ']' 00:21:02.716 13:04:07 -- nvmf/common.sh@479 -- # killprocess 4020111 00:21:02.716 13:04:07 -- common/autotest_common.sh@936 -- # '[' -z 4020111 ']' 00:21:02.716 13:04:07 -- common/autotest_common.sh@940 -- # kill -0 4020111 00:21:02.716 13:04:07 -- common/autotest_common.sh@941 -- # uname 00:21:02.716 13:04:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.716 13:04:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4020111 00:21:02.716 13:04:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:02.716 13:04:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:02.716 13:04:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4020111' 00:21:02.716 killing process with pid 4020111 00:21:02.716 13:04:07 -- common/autotest_common.sh@955 -- # kill 4020111 00:21:02.716 [2024-04-26 13:04:07.710132] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:02.716 13:04:07 -- common/autotest_common.sh@960 -- # wait 4020111 00:21:02.976 13:04:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:02.976 13:04:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:02.976 13:04:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:02.976 13:04:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.976 13:04:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.976 13:04:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.976 13:04:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.976 13:04:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.983 13:04:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:04.983 13:04:09 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.983 00:21:04.983 real 0m22.290s 00:21:04.983 user 0m23.509s 00:21:04.983 sys 0m9.328s 00:21:04.983 13:04:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:04.983 13:04:09 -- common/autotest_common.sh@10 -- # set +x 00:21:04.983 ************************************ 00:21:04.983 END TEST nvmf_fips 00:21:04.983 ************************************ 00:21:04.983 13:04:09 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:21:04.983 13:04:09 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:04.983 13:04:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:04.983 13:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.983 13:04:09 -- common/autotest_common.sh@10 -- # set +x 00:21:05.244 ************************************ 00:21:05.244 START TEST nvmf_fuzz 00:21:05.244 ************************************ 00:21:05.244 13:04:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.244 * Looking for test storage... 
00:21:05.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.244 13:04:10 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.244 13:04:10 -- nvmf/common.sh@7 -- # uname -s 00:21:05.244 13:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.244 13:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.244 13:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.244 13:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.244 13:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.244 13:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.244 13:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.244 13:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.244 13:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.244 13:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.244 13:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.244 13:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.244 13:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.244 13:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.244 13:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.244 13:04:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.244 13:04:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.244 13:04:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.244 13:04:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.244 13:04:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.244 13:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.244 13:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.244 13:04:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.244 13:04:10 -- paths/export.sh@5 -- # export PATH 00:21:05.244 13:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.244 13:04:10 -- nvmf/common.sh@47 -- # : 0 00:21:05.244 13:04:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.244 13:04:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.244 13:04:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.244 13:04:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.244 13:04:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.244 13:04:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.244 13:04:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.244 13:04:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.244 13:04:10 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:05.244 13:04:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:05.244 13:04:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.244 13:04:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:05.244 13:04:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:05.244 13:04:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:05.244 13:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.244 13:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.244 13:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.244 13:04:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:05.244 13:04:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:05.244 13:04:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.244 13:04:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.387 13:04:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:13.387 13:04:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.387 13:04:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.387 13:04:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:13.387 13:04:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.387 13:04:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:13.387 13:04:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.387 13:04:17 -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.387 13:04:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.387 13:04:17 -- nvmf/common.sh@296 -- # e810=() 00:21:13.387 13:04:17 -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.387 13:04:17 -- nvmf/common.sh@297 -- # x722=() 
00:21:13.387 13:04:17 -- nvmf/common.sh@297 -- # local -ga x722 00:21:13.387 13:04:17 -- nvmf/common.sh@298 -- # mlx=() 00:21:13.387 13:04:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.387 13:04:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.387 13:04:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.387 13:04:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:13.387 13:04:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.387 13:04:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.387 13:04:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:13.387 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:13.387 13:04:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.387 13:04:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:13.387 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:13.387 13:04:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.387 13:04:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.388 13:04:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.388 13:04:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:13.388 13:04:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:13.388 13:04:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.388 13:04:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.388 13:04:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:13.388 13:04:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.388 13:04:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:13.388 Found net devices under 0000:31:00.0: cvl_0_0 00:21:13.388 13:04:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:13.388 13:04:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.388 13:04:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.388 13:04:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:13.388 13:04:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.388 13:04:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:13.388 Found net devices under 0000:31:00.1: cvl_0_1 00:21:13.388 13:04:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.388 13:04:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:13.388 13:04:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:13.388 13:04:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:13.388 13:04:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:13.388 13:04:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:13.388 13:04:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.388 13:04:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.388 13:04:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.388 13:04:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:13.388 13:04:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.388 13:04:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.388 13:04:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:13.388 13:04:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.388 13:04:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.388 13:04:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:13.388 13:04:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:13.388 13:04:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.388 13:04:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.388 13:04:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.388 13:04:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.388 13:04:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:13.388 13:04:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.388 13:04:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.388 13:04:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.388 13:04:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:13.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:21:13.388 00:21:13.388 --- 10.0.0.2 ping statistics --- 00:21:13.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.388 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:21:13.388 13:04:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:21:13.388 00:21:13.388 --- 10.0.0.1 ping statistics --- 00:21:13.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.388 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:13.388 13:04:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.388 13:04:17 -- nvmf/common.sh@411 -- # return 0 00:21:13.388 13:04:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:13.388 13:04:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.388 13:04:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:13.388 13:04:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:13.388 13:04:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.388 13:04:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:13.388 13:04:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4026869 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4026869 00:21:13.388 13:04:17 -- common/autotest_common.sh@817 -- # '[' -z 4026869 ']' 00:21:13.388 13:04:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.388 13:04:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:13.388 13:04:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
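[editor's note] The nvmf_tcp_init trace above builds a loopback test bed by moving one E810 port into a private network namespace for the target while the peer port stays in the root namespace for the initiator. Condensed into one runnable block below; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones this rig uses, so adjust for other hardware.

    #!/usr/bin/env bash
    # Loopback plumbing for NVMe/TCP testing, as performed by nvmf_tcp_init above.
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address (inside namespace)
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # sanity checks in both directions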
00:21:13.388 13:04:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:13.388 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.388 13:04:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:13.388 13:04:17 -- common/autotest_common.sh@850 -- # return 0 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.388 13:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.388 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.388 13:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:13.388 13:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.388 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.388 Malloc0 00:21:13.388 13:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.388 13:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.388 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.388 13:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.388 13:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.388 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.388 13:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.388 13:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.388 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.388 13:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:13.388 13:04:17 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:45.504 Fuzzing completed. Shutting down the fuzz application 00:21:45.504 00:21:45.504 Dumping successful admin opcodes: 00:21:45.504 8, 9, 10, 24, 00:21:45.504 Dumping successful io opcodes: 00:21:45.504 0, 9, 00:21:45.504 NS: 0x200003aeff00 I/O qp, Total commands completed: 877163, total successful commands: 5098, random_seed: 3548921600 00:21:45.504 NS: 0x200003aeff00 admin qp, Total commands completed: 110593, total successful commands: 908, random_seed: 3079971264 00:21:45.504 13:04:48 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:45.504 Fuzzing completed. 
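[editor's note] The fuzz run above is driven by a handful of RPCs plus two nvme_fuzz invocations. The sketch below writes the same sequence as a plain script against an already-running nvmf_tgt; it assumes rpc_cmd in the test suite maps onto scripts/rpc.py talking to the default /var/tmp/spdk.sock, and all flags are copied from the trace rather than documented here.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used in this run
    RPC="$SPDK/scripts/rpc.py"
    # Target-side configuration, mirroring fabrics_fuzz.sh lines 19-25 in the trace
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create -b Malloc0 64 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 30-second randomized pass, then a replay pass driven by the canned JSON command list
    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -F "$TRID" \
        -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a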
Shutting down the fuzz application 00:21:45.504 00:21:45.504 Dumping successful admin opcodes: 00:21:45.504 24, 00:21:45.504 Dumping successful io opcodes: 00:21:45.504 00:21:45.504 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2405853561 00:21:45.504 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2405930125 00:21:45.504 13:04:49 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.504 13:04:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.504 13:04:49 -- common/autotest_common.sh@10 -- # set +x 00:21:45.504 13:04:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.504 13:04:49 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:45.504 13:04:49 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:45.504 13:04:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:45.504 13:04:49 -- nvmf/common.sh@117 -- # sync 00:21:45.504 13:04:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.504 13:04:49 -- nvmf/common.sh@120 -- # set +e 00:21:45.504 13:04:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.504 13:04:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.504 rmmod nvme_tcp 00:21:45.504 rmmod nvme_fabrics 00:21:45.504 rmmod nvme_keyring 00:21:45.504 13:04:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.504 13:04:49 -- nvmf/common.sh@124 -- # set -e 00:21:45.504 13:04:49 -- nvmf/common.sh@125 -- # return 0 00:21:45.504 13:04:49 -- nvmf/common.sh@478 -- # '[' -n 4026869 ']' 00:21:45.504 13:04:49 -- nvmf/common.sh@479 -- # killprocess 4026869 00:21:45.504 13:04:49 -- common/autotest_common.sh@936 -- # '[' -z 4026869 ']' 00:21:45.504 13:04:49 -- common/autotest_common.sh@940 -- # kill -0 4026869 00:21:45.504 13:04:49 -- common/autotest_common.sh@941 -- # uname 00:21:45.504 13:04:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.504 13:04:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4026869 00:21:45.504 13:04:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.504 13:04:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.504 13:04:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4026869' 00:21:45.504 killing process with pid 4026869 00:21:45.504 13:04:49 -- common/autotest_common.sh@955 -- # kill 4026869 00:21:45.504 13:04:49 -- common/autotest_common.sh@960 -- # wait 4026869 00:21:45.504 13:04:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:45.504 13:04:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:45.504 13:04:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:45.504 13:04:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.504 13:04:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.504 13:04:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.504 13:04:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.504 13:04:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.890 13:04:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.890 13:04:51 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:47.151 00:21:47.151 real 0m41.859s 00:21:47.151 user 0m57.219s 00:21:47.151 sys 
0m13.863s 00:21:47.151 13:04:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.151 13:04:51 -- common/autotest_common.sh@10 -- # set +x 00:21:47.151 ************************************ 00:21:47.151 END TEST nvmf_fuzz 00:21:47.151 ************************************ 00:21:47.151 13:04:52 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:47.151 13:04:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.151 13:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.151 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:21:47.151 ************************************ 00:21:47.151 START TEST nvmf_multiconnection 00:21:47.151 ************************************ 00:21:47.151 13:04:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:47.412 * Looking for test storage... 00:21:47.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.412 13:04:52 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.412 13:04:52 -- nvmf/common.sh@7 -- # uname -s 00:21:47.412 13:04:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.412 13:04:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.412 13:04:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.412 13:04:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.412 13:04:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.412 13:04:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.412 13:04:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.412 13:04:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.412 13:04:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.412 13:04:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.412 13:04:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.412 13:04:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.412 13:04:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.412 13:04:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.412 13:04:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.412 13:04:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.412 13:04:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.412 13:04:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.412 13:04:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.412 13:04:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.412 13:04:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:47.412 13:04:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.412 13:04:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.412 13:04:52 -- paths/export.sh@5 -- # export PATH 00:21:47.412 13:04:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.412 13:04:52 -- nvmf/common.sh@47 -- # : 0 00:21:47.412 13:04:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.412 13:04:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.412 13:04:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.412 13:04:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.412 13:04:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.412 13:04:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.412 13:04:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.412 13:04:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.412 13:04:52 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.412 13:04:52 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.412 13:04:52 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:47.412 13:04:52 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:47.412 13:04:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:47.412 13:04:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.412 13:04:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:47.412 13:04:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:47.412 13:04:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:47.412 13:04:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.412 13:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.412 13:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.412 13:04:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:47.412 13:04:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:47.412 13:04:52 -- nvmf/common.sh@285 -- # xtrace_disable 
00:21:47.412 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:21:55.554 13:04:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:55.554 13:04:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.554 13:04:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.554 13:04:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.554 13:04:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.554 13:04:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.554 13:04:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.554 13:04:59 -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.554 13:04:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.554 13:04:59 -- nvmf/common.sh@296 -- # e810=() 00:21:55.554 13:04:59 -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.554 13:04:59 -- nvmf/common.sh@297 -- # x722=() 00:21:55.554 13:04:59 -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.554 13:04:59 -- nvmf/common.sh@298 -- # mlx=() 00:21:55.554 13:04:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.554 13:04:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.554 13:04:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.554 13:04:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.554 13:04:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.554 13:04:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.554 13:04:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.555 13:04:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.555 13:04:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.555 13:04:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.555 13:04:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.555 13:04:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.555 13:04:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.555 13:04:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.555 13:04:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.555 13:04:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.555 13:04:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:55.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:55.555 13:04:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.555 13:04:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:55.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:55.555 13:04:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.555 13:04:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.555 13:04:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.555 13:04:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:55.555 13:04:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.555 13:04:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:55.555 Found net devices under 0000:31:00.0: cvl_0_0 00:21:55.555 13:04:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.555 13:04:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.555 13:04:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.555 13:04:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:55.555 13:04:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.555 13:04:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:55.555 Found net devices under 0000:31:00.1: cvl_0_1 00:21:55.555 13:04:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.555 13:04:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:55.555 13:04:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:55.555 13:04:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:55.555 13:04:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.555 13:04:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.555 13:04:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.555 13:04:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.555 13:04:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.555 13:04:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.555 13:04:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.555 13:04:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.555 13:04:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.555 13:04:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.555 13:04:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.555 13:04:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.555 13:04:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.555 13:04:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.555 13:04:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.555 13:04:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.555 13:04:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.555 13:04:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.555 13:04:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.555 13:04:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:55.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:21:55.555 00:21:55.555 --- 10.0.0.2 ping statistics --- 00:21:55.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.555 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:21:55.555 13:04:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:21:55.555 00:21:55.555 --- 10.0.0.1 ping statistics --- 00:21:55.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.555 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:21:55.555 13:04:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.555 13:04:59 -- nvmf/common.sh@411 -- # return 0 00:21:55.555 13:04:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:55.555 13:04:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.555 13:04:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:55.555 13:04:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.555 13:04:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:55.555 13:04:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:55.555 13:04:59 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:55.555 13:04:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:55.555 13:04:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:55.555 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:21:55.555 13:04:59 -- nvmf/common.sh@470 -- # nvmfpid=4037267 00:21:55.555 13:04:59 -- nvmf/common.sh@471 -- # waitforlisten 4037267 00:21:55.555 13:04:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.555 13:04:59 -- common/autotest_common.sh@817 -- # '[' -z 4037267 ']' 00:21:55.555 13:04:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.555 13:04:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:55.555 13:04:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.555 13:04:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:55.555 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:21:55.555 [2024-04-26 13:04:59.744396] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:55.555 [2024-04-26 13:04:59.744460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.555 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.555 [2024-04-26 13:04:59.817165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.555 [2024-04-26 13:04:59.891169] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.555 [2024-04-26 13:04:59.891211] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:55.555 [2024-04-26 13:04:59.891220] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.555 [2024-04-26 13:04:59.891228] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.555 [2024-04-26 13:04:59.891235] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.555 [2024-04-26 13:04:59.891404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.555 [2024-04-26 13:04:59.891543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.555 [2024-04-26 13:04:59.891740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.555 [2024-04-26 13:04:59.891740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.555 13:05:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:55.555 13:05:00 -- common/autotest_common.sh@850 -- # return 0 00:21:55.555 13:05:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:55.555 13:05:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:55.555 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.555 13:05:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.555 13:05:00 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.555 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.555 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.555 [2024-04-26 13:05:00.572355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.555 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.555 13:05:00 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:55.555 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.555 13:05:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:55.555 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.555 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.555 Malloc1 00:21:55.555 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.555 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:55.555 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.555 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 [2024-04-26 13:05:00.639790] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.816 13:05:00 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 Malloc2 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.816 13:05:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 Malloc3 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.816 13:05:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 Malloc4 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.816 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.816 13:05:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:55.816 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.816 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 Malloc5 00:21:55.816 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.817 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:55.817 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.817 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.817 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.817 13:05:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:55.817 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.817 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.817 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.817 13:05:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:55.817 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.817 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.817 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.817 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.817 13:05:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:55.817 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.817 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.817 Malloc6 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.077 13:05:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 Malloc7 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.077 13:05:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 Malloc8 00:21:56.077 13:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:56.077 13:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:56.077 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:56.077 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.077 13:05:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:56.077 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 Malloc9 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:56.077 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:56.077 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:56.077 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.077 13:05:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:56.077 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.077 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.077 Malloc10 00:21:56.077 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.077 13:05:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:56.078 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.078 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.078 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.078 13:05:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:56.078 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.078 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.078 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.078 13:05:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:56.078 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.078 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.078 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.078 13:05:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.078 13:05:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:56.078 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.078 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.338 Malloc11 00:21:56.338 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.338 13:05:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:56.338 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.338 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.338 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.338 13:05:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:56.338 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.338 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.338 
13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.338 13:05:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:56.338 13:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.338 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:21:56.338 13:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.338 13:05:01 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:56.338 13:05:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.338 13:05:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:57.721 13:05:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:57.721 13:05:02 -- common/autotest_common.sh@1184 -- # local i=0 00:21:57.721 13:05:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:57.721 13:05:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:57.721 13:05:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:00.262 13:05:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:00.262 13:05:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:00.262 13:05:04 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:22:00.262 13:05:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:00.262 13:05:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:00.262 13:05:04 -- common/autotest_common.sh@1194 -- # return 0 00:22:00.262 13:05:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.262 13:05:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:01.643 13:05:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:01.643 13:05:06 -- common/autotest_common.sh@1184 -- # local i=0 00:22:01.643 13:05:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.643 13:05:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:01.643 13:05:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:03.554 13:05:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:03.554 13:05:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:03.554 13:05:08 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:22:03.554 13:05:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:03.554 13:05:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.554 13:05:08 -- common/autotest_common.sh@1194 -- # return 0 00:22:03.554 13:05:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.554 13:05:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:04.936 13:05:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:04.936 13:05:09 -- common/autotest_common.sh@1184 -- # local i=0 00:22:04.936 13:05:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.936 
13:05:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:04.936 13:05:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:07.481 13:05:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:07.481 13:05:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:07.481 13:05:11 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:22:07.481 13:05:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:07.481 13:05:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:07.481 13:05:11 -- common/autotest_common.sh@1194 -- # return 0 00:22:07.481 13:05:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:07.481 13:05:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:08.864 13:05:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:08.864 13:05:13 -- common/autotest_common.sh@1184 -- # local i=0 00:22:08.864 13:05:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.864 13:05:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:08.864 13:05:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:10.775 13:05:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:10.775 13:05:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:10.775 13:05:15 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:22:10.775 13:05:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:10.775 13:05:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.775 13:05:15 -- common/autotest_common.sh@1194 -- # return 0 00:22:10.775 13:05:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.775 13:05:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:12.220 13:05:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:12.220 13:05:17 -- common/autotest_common.sh@1184 -- # local i=0 00:22:12.220 13:05:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.220 13:05:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:12.220 13:05:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:14.152 13:05:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:14.152 13:05:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:14.152 13:05:19 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:22:14.152 13:05:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:14.152 13:05:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:14.153 13:05:19 -- common/autotest_common.sh@1194 -- # return 0 00:22:14.153 13:05:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.153 13:05:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:16.064 13:05:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:16.064 13:05:20 -- common/autotest_common.sh@1184 -- # 
local i=0 00:22:16.064 13:05:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:16.064 13:05:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:16.064 13:05:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:17.977 13:05:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:17.977 13:05:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:17.977 13:05:22 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:22:17.977 13:05:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:17.977 13:05:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.977 13:05:22 -- common/autotest_common.sh@1194 -- # return 0 00:22:17.977 13:05:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.977 13:05:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:19.887 13:05:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:19.887 13:05:24 -- common/autotest_common.sh@1184 -- # local i=0 00:22:19.887 13:05:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:19.887 13:05:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:19.887 13:05:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:21.797 13:05:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:21.797 13:05:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:21.797 13:05:26 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:22:21.797 13:05:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:21.797 13:05:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:21.797 13:05:26 -- common/autotest_common.sh@1194 -- # return 0 00:22:21.797 13:05:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:21.797 13:05:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:23.711 13:05:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:23.711 13:05:28 -- common/autotest_common.sh@1184 -- # local i=0 00:22:23.711 13:05:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:23.711 13:05:28 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:23.711 13:05:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:25.634 13:05:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:25.634 13:05:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:25.634 13:05:30 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:22:25.634 13:05:30 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:25.634 13:05:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:25.634 13:05:30 -- common/autotest_common.sh@1194 -- # return 0 00:22:25.634 13:05:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:25.634 13:05:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:27.547 
13:05:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:27.547 13:05:32 -- common/autotest_common.sh@1184 -- # local i=0 00:22:27.547 13:05:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.547 13:05:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:27.547 13:05:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:29.459 13:05:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:29.459 13:05:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:29.459 13:05:34 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:22:29.459 13:05:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:29.459 13:05:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.459 13:05:34 -- common/autotest_common.sh@1194 -- # return 0 00:22:29.459 13:05:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.459 13:05:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:31.375 13:05:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:31.375 13:05:36 -- common/autotest_common.sh@1184 -- # local i=0 00:22:31.375 13:05:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:31.375 13:05:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:31.375 13:05:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:33.288 13:05:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:33.288 13:05:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:33.288 13:05:38 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:22:33.288 13:05:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:33.288 13:05:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:33.288 13:05:38 -- common/autotest_common.sh@1194 -- # return 0 00:22:33.288 13:05:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.289 13:05:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:35.202 13:05:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:35.202 13:05:40 -- common/autotest_common.sh@1184 -- # local i=0 00:22:35.202 13:05:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:35.202 13:05:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:35.202 13:05:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:37.116 13:05:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:37.116 13:05:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:37.116 13:05:42 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:22:37.116 13:05:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:37.116 13:05:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:37.116 13:05:42 -- common/autotest_common.sh@1194 -- # return 0 00:22:37.116 13:05:42 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:37.116 [global] 00:22:37.116 thread=1 00:22:37.116 
invalidate=1 00:22:37.116 rw=read 00:22:37.116 time_based=1 00:22:37.116 runtime=10 00:22:37.116 ioengine=libaio 00:22:37.116 direct=1 00:22:37.116 bs=262144 00:22:37.116 iodepth=64 00:22:37.116 norandommap=1 00:22:37.116 numjobs=1 00:22:37.116 00:22:37.116 [job0] 00:22:37.116 filename=/dev/nvme0n1 00:22:37.116 [job1] 00:22:37.116 filename=/dev/nvme10n1 00:22:37.116 [job2] 00:22:37.116 filename=/dev/nvme1n1 00:22:37.116 [job3] 00:22:37.116 filename=/dev/nvme2n1 00:22:37.116 [job4] 00:22:37.116 filename=/dev/nvme3n1 00:22:37.116 [job5] 00:22:37.116 filename=/dev/nvme4n1 00:22:37.116 [job6] 00:22:37.116 filename=/dev/nvme5n1 00:22:37.116 [job7] 00:22:37.116 filename=/dev/nvme6n1 00:22:37.116 [job8] 00:22:37.116 filename=/dev/nvme7n1 00:22:37.116 [job9] 00:22:37.116 filename=/dev/nvme8n1 00:22:37.116 [job10] 00:22:37.116 filename=/dev/nvme9n1 00:22:37.376 Could not set queue depth (nvme0n1) 00:22:37.376 Could not set queue depth (nvme10n1) 00:22:37.376 Could not set queue depth (nvme1n1) 00:22:37.376 Could not set queue depth (nvme2n1) 00:22:37.376 Could not set queue depth (nvme3n1) 00:22:37.376 Could not set queue depth (nvme4n1) 00:22:37.376 Could not set queue depth (nvme5n1) 00:22:37.376 Could not set queue depth (nvme6n1) 00:22:37.376 Could not set queue depth (nvme7n1) 00:22:37.376 Could not set queue depth (nvme8n1) 00:22:37.376 Could not set queue depth (nvme9n1) 00:22:37.637 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.637 fio-3.35 00:22:37.637 Starting 11 threads 00:22:49.878 00:22:49.878 job0: (groupid=0, jobs=1): err= 0: pid=4045852: Fri Apr 26 13:05:53 2024 00:22:49.878 read: IOPS=836, BW=209MiB/s (219MB/s)(2101MiB/10045msec) 00:22:49.878 slat (usec): min=7, max=44845, avg=1183.85, stdev=3042.58 00:22:49.878 clat (msec): min=16, max=155, avg=75.21, stdev=26.28 00:22:49.878 lat (msec): min=18, max=157, avg=76.39, stdev=26.73 00:22:49.878 clat percentiles (msec): 00:22:49.878 | 1.00th=[ 28], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 53], 00:22:49.878 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 69], 60.00th=[ 78], 00:22:49.878 | 70.00th=[ 91], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 123], 00:22:49.878 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 150], 00:22:49.878 | 99.99th=[ 157] 
00:22:49.878 bw ( KiB/s): min=133120, max=313856, per=8.49%, avg=213504.00, stdev=66800.02, samples=20 00:22:49.878 iops : min= 520, max= 1226, avg=834.00, stdev=260.94, samples=20 00:22:49.878 lat (msec) : 20=0.08%, 50=12.10%, 100=62.74%, 250=25.07% 00:22:49.878 cpu : usr=0.33%, sys=2.71%, ctx=1930, majf=0, minf=3534 00:22:49.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:49.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.878 issued rwts: total=8403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.878 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.878 job1: (groupid=0, jobs=1): err= 0: pid=4045870: Fri Apr 26 13:05:53 2024 00:22:49.878 read: IOPS=729, BW=182MiB/s (191MB/s)(1834MiB/10056msec) 00:22:49.878 slat (usec): min=7, max=58551, avg=1183.16, stdev=3178.28 00:22:49.878 clat (msec): min=40, max=158, avg=86.49, stdev=18.18 00:22:49.878 lat (msec): min=47, max=158, avg=87.68, stdev=18.24 00:22:49.878 clat percentiles (msec): 00:22:49.878 | 1.00th=[ 56], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 71], 00:22:49.878 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 88], 00:22:49.878 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 122], 00:22:49.878 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 150], 99.95th=[ 150], 00:22:49.878 | 99.99th=[ 159] 00:22:49.878 bw ( KiB/s): min=130560, max=235520, per=7.40%, avg=186157.30, stdev=30016.01, samples=20 00:22:49.878 iops : min= 510, max= 920, avg=727.15, stdev=117.24, samples=20 00:22:49.878 lat (msec) : 50=0.15%, 100=77.41%, 250=22.44% 00:22:49.878 cpu : usr=0.24%, sys=2.73%, ctx=1659, majf=0, minf=4097 00:22:49.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:49.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.878 issued rwts: total=7334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.878 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.878 job2: (groupid=0, jobs=1): err= 0: pid=4045890: Fri Apr 26 13:05:53 2024 00:22:49.878 read: IOPS=805, BW=201MiB/s (211MB/s)(2026MiB/10063msec) 00:22:49.878 slat (usec): min=8, max=53347, avg=1154.37, stdev=3158.09 00:22:49.878 clat (msec): min=12, max=167, avg=78.24, stdev=21.13 00:22:49.878 lat (msec): min=13, max=167, avg=79.40, stdev=21.47 00:22:49.878 clat percentiles (msec): 00:22:49.878 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 59], 00:22:49.878 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 81], 00:22:49.878 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 118], 00:22:49.878 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 159], 00:22:49.878 | 99.99th=[ 167] 00:22:49.878 bw ( KiB/s): min=133632, max=290304, per=8.18%, avg=205822.55, stdev=44188.40, samples=20 00:22:49.878 iops : min= 522, max= 1134, avg=803.95, stdev=172.57, samples=20 00:22:49.878 lat (msec) : 20=0.20%, 50=7.36%, 100=76.52%, 250=15.92% 00:22:49.878 cpu : usr=0.30%, sys=2.88%, ctx=1837, majf=0, minf=4097 00:22:49.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=8102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job3: (groupid=0, jobs=1): err= 0: pid=4045894: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=724, BW=181MiB/s (190MB/s)(1826MiB/10080msec) 00:22:49.879 slat (usec): min=7, max=51069, avg=1201.59, stdev=3550.09 00:22:49.879 clat (msec): min=5, max=213, avg=87.00, stdev=28.74 00:22:49.879 lat (msec): min=5, max=213, avg=88.20, stdev=29.29 00:22:49.879 clat percentiles (msec): 00:22:49.879 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 49], 20.00th=[ 61], 00:22:49.879 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 100], 00:22:49.879 | 70.00th=[ 105], 80.00th=[ 112], 90.00th=[ 123], 95.00th=[ 130], 00:22:49.879 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 182], 99.95th=[ 182], 00:22:49.879 | 99.99th=[ 213] 00:22:49.879 bw ( KiB/s): min=129024, max=302592, per=7.37%, avg=185292.80, stdev=51856.51, samples=20 00:22:49.879 iops : min= 504, max= 1182, avg=723.80, stdev=202.56, samples=20 00:22:49.879 lat (msec) : 10=0.34%, 20=1.22%, 50=9.57%, 100=50.12%, 250=38.74% 00:22:49.879 cpu : usr=0.22%, sys=2.26%, ctx=1823, majf=0, minf=4097 00:22:49.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=7302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job4: (groupid=0, jobs=1): err= 0: pid=4045897: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=1031, BW=258MiB/s (270MB/s)(2592MiB/10055msec) 00:22:49.879 slat (usec): min=5, max=65170, avg=802.46, stdev=3109.92 00:22:49.879 clat (msec): min=3, max=197, avg=61.19, stdev=30.31 00:22:49.879 lat (msec): min=3, max=197, avg=61.99, stdev=30.79 00:22:49.879 clat percentiles (msec): 00:22:49.879 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 26], 20.00th=[ 32], 00:22:49.879 | 30.00th=[ 44], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 66], 00:22:49.879 | 70.00th=[ 74], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 120], 00:22:49.879 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 161], 00:22:49.879 | 99.99th=[ 188] 00:22:49.879 bw ( KiB/s): min=132096, max=409600, per=10.49%, avg=263808.00, stdev=82185.99, samples=20 00:22:49.879 iops : min= 516, max= 1600, avg=1030.50, stdev=321.04, samples=20 00:22:49.879 lat (msec) : 4=0.04%, 10=1.45%, 20=3.71%, 50=35.15%, 100=45.84% 00:22:49.879 lat (msec) : 250=13.81% 00:22:49.879 cpu : usr=0.34%, sys=2.78%, ctx=2433, majf=0, minf=4097 00:22:49.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=10368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job5: (groupid=0, jobs=1): err= 0: pid=4045920: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=976, BW=244MiB/s (256MB/s)(2451MiB/10043msec) 00:22:49.879 slat (usec): min=6, max=95089, avg=782.55, stdev=3167.06 00:22:49.879 clat (msec): min=2, max=207, avg=64.66, stdev=27.84 00:22:49.879 lat (msec): min=2, max=217, avg=65.44, stdev=28.27 00:22:49.879 clat percentiles (msec): 00:22:49.879 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 34], 20.00th=[ 46], 00:22:49.879 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 
59], 60.00th=[ 65], 00:22:49.879 | 70.00th=[ 73], 80.00th=[ 86], 90.00th=[ 106], 95.00th=[ 118], 00:22:49.879 | 99.00th=[ 140], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 176], 00:22:49.879 | 99.99th=[ 207] 00:22:49.879 bw ( KiB/s): min=143872, max=305152, per=9.91%, avg=249395.20, stdev=44069.87, samples=20 00:22:49.879 iops : min= 562, max= 1192, avg=974.20, stdev=172.15, samples=20 00:22:49.879 lat (msec) : 4=0.25%, 10=1.52%, 20=2.29%, 50=23.18%, 100=60.01% 00:22:49.879 lat (msec) : 250=12.74% 00:22:49.879 cpu : usr=0.42%, sys=3.11%, ctx=2539, majf=0, minf=4097 00:22:49.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=9805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job6: (groupid=0, jobs=1): err= 0: pid=4045932: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=778, BW=195MiB/s (204MB/s)(1962MiB/10074msec) 00:22:49.879 slat (usec): min=6, max=71900, avg=1154.70, stdev=3163.78 00:22:49.879 clat (msec): min=4, max=199, avg=80.89, stdev=23.43 00:22:49.879 lat (msec): min=5, max=199, avg=82.05, stdev=23.78 00:22:49.879 clat percentiles (msec): 00:22:49.879 | 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 57], 20.00th=[ 65], 00:22:49.879 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 85], 00:22:49.879 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 112], 95.00th=[ 121], 00:22:49.879 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 194], 99.95th=[ 194], 00:22:49.879 | 99.99th=[ 201] 00:22:49.879 bw ( KiB/s): min=133632, max=286208, per=7.92%, avg=199244.80, stdev=40372.81, samples=20 00:22:49.879 iops : min= 522, max= 1118, avg=778.30, stdev=157.71, samples=20 00:22:49.879 lat (msec) : 10=0.50%, 20=1.31%, 50=5.21%, 100=74.98%, 250=18.00% 00:22:49.879 cpu : usr=0.20%, sys=2.69%, ctx=1802, majf=0, minf=4097 00:22:49.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=7846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job7: (groupid=0, jobs=1): err= 0: pid=4045941: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=1143, BW=286MiB/s (300MB/s)(2865MiB/10019msec) 00:22:49.879 slat (usec): min=7, max=66135, avg=798.06, stdev=2301.26 00:22:49.879 clat (msec): min=4, max=153, avg=55.06, stdev=25.45 00:22:49.879 lat (msec): min=4, max=153, avg=55.86, stdev=25.79 00:22:49.879 clat percentiles (msec): 00:22:49.879 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 30], 00:22:49.879 | 30.00th=[ 33], 40.00th=[ 45], 50.00th=[ 53], 60.00th=[ 61], 00:22:49.879 | 70.00th=[ 67], 80.00th=[ 78], 90.00th=[ 92], 95.00th=[ 103], 00:22:49.879 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 148], 00:22:49.879 | 99.99th=[ 155] 00:22:49.879 bw ( KiB/s): min=155136, max=519680, per=11.60%, avg=291811.10, stdev=107409.81, samples=20 00:22:49.879 iops : min= 606, max= 2030, avg=1139.85, stdev=419.59, samples=20 00:22:49.879 lat (msec) : 10=0.38%, 20=0.81%, 50=45.89%, 100=47.06%, 250=5.85% 00:22:49.879 cpu : usr=0.45%, sys=4.02%, ctx=2436, majf=0, minf=4097 00:22:49.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.3%, >=64=99.5% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=11461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job8: (groupid=0, jobs=1): err= 0: pid=4045969: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=791, BW=198MiB/s (207MB/s)(1992MiB/10068msec) 00:22:49.879 slat (usec): min=5, max=105248, avg=1162.84, stdev=3528.03 00:22:49.879 clat (msec): min=3, max=188, avg=79.61, stdev=27.60 00:22:49.879 lat (msec): min=3, max=188, avg=80.77, stdev=28.01 00:22:49.879 clat percentiles (msec): 00:22:49.879 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 59], 00:22:49.879 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 87], 00:22:49.879 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 113], 95.00th=[ 123], 00:22:49.879 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:22:49.879 | 99.99th=[ 188] 00:22:49.879 bw ( KiB/s): min=126976, max=291840, per=8.05%, avg=202393.60, stdev=48895.92, samples=20 00:22:49.879 iops : min= 496, max= 1140, avg=790.60, stdev=191.00, samples=20 00:22:49.879 lat (msec) : 4=0.03%, 10=0.24%, 20=1.71%, 50=12.07%, 100=65.45% 00:22:49.879 lat (msec) : 250=20.50% 00:22:49.879 cpu : usr=0.37%, sys=2.24%, ctx=1766, majf=0, minf=4097 00:22:49.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=7969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job9: (groupid=0, jobs=1): err= 0: pid=4045981: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=1057, BW=264MiB/s (277MB/s)(2658MiB/10058msec) 00:22:49.879 slat (usec): min=5, max=77159, avg=773.72, stdev=3185.05 00:22:49.879 clat (usec): min=1454, max=172945, avg=59709.12, stdev=38914.02 00:22:49.879 lat (usec): min=1501, max=189566, avg=60482.84, stdev=39461.36 00:22:49.879 clat percentiles (msec): 00:22:49.879 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 24], 00:22:49.879 | 30.00th=[ 27], 40.00th=[ 31], 50.00th=[ 56], 60.00th=[ 71], 00:22:49.879 | 70.00th=[ 83], 80.00th=[ 100], 90.00th=[ 113], 95.00th=[ 130], 00:22:49.879 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 171], 99.95th=[ 171], 00:22:49.879 | 99.99th=[ 171] 00:22:49.879 bw ( KiB/s): min=135168, max=648704, per=10.76%, avg=270592.00, stdev=136394.21, samples=20 00:22:49.879 iops : min= 528, max= 2534, avg=1057.00, stdev=532.79, samples=20 00:22:49.879 lat (msec) : 2=0.02%, 4=0.51%, 10=4.20%, 20=7.57%, 50=34.75% 00:22:49.879 lat (msec) : 100=33.28%, 250=19.67% 00:22:49.879 cpu : usr=0.40%, sys=3.22%, ctx=2504, majf=0, minf=4097 00:22:49.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:49.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.879 issued rwts: total=10633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.879 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.879 job10: (groupid=0, jobs=1): err= 0: pid=4045991: Fri Apr 26 13:05:53 2024 00:22:49.879 read: IOPS=974, BW=244MiB/s (256MB/s)(2455MiB/10074msec) 00:22:49.879 slat 
(usec): min=6, max=34705, avg=985.54, stdev=2732.37 00:22:49.880 clat (msec): min=3, max=194, avg=64.60, stdev=35.70 00:22:49.880 lat (msec): min=3, max=194, avg=65.59, stdev=36.25 00:22:49.880 clat percentiles (msec): 00:22:49.880 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:22:49.880 | 30.00th=[ 31], 40.00th=[ 34], 50.00th=[ 65], 60.00th=[ 79], 00:22:49.880 | 70.00th=[ 88], 80.00th=[ 102], 90.00th=[ 115], 95.00th=[ 125], 00:22:49.880 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 188], 99.95th=[ 194], 00:22:49.880 | 99.99th=[ 194] 00:22:49.880 bw ( KiB/s): min=130560, max=566784, per=9.93%, avg=249753.60, stdev=151951.10, samples=20 00:22:49.880 iops : min= 510, max= 2214, avg=975.60, stdev=593.56, samples=20 00:22:49.880 lat (msec) : 4=0.03%, 10=0.55%, 20=0.59%, 50=44.06%, 100=33.85% 00:22:49.880 lat (msec) : 250=20.92% 00:22:49.880 cpu : usr=0.26%, sys=3.18%, ctx=2098, majf=0, minf=4097 00:22:49.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:49.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.880 issued rwts: total=9819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.880 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.880 00:22:49.880 Run status group 0 (all jobs): 00:22:49.880 READ: bw=2456MiB/s (2576MB/s), 181MiB/s-286MiB/s (190MB/s-300MB/s), io=24.2GiB (26.0GB), run=10019-10080msec 00:22:49.880 00:22:49.880 Disk stats (read/write): 00:22:49.880 nvme0n1: ios=16357/0, merge=0/0, ticks=1217100/0, in_queue=1217100, util=96.46% 00:22:49.880 nvme10n1: ios=14251/0, merge=0/0, ticks=1219942/0, in_queue=1219942, util=96.63% 00:22:49.880 nvme1n1: ios=15868/0, merge=0/0, ticks=1218198/0, in_queue=1218198, util=97.07% 00:22:49.880 nvme2n1: ios=14330/0, merge=0/0, ticks=1215734/0, in_queue=1215734, util=97.30% 00:22:49.880 nvme3n1: ios=20280/0, merge=0/0, ticks=1225598/0, in_queue=1225598, util=97.39% 00:22:49.880 nvme4n1: ios=19177/0, merge=0/0, ticks=1225804/0, in_queue=1225804, util=97.87% 00:22:49.880 nvme5n1: ios=15420/0, merge=0/0, ticks=1215108/0, in_queue=1215108, util=98.10% 00:22:49.880 nvme6n1: ios=22264/0, merge=0/0, ticks=1222664/0, in_queue=1222664, util=98.25% 00:22:49.880 nvme7n1: ios=15643/0, merge=0/0, ticks=1215319/0, in_queue=1215319, util=98.74% 00:22:49.880 nvme8n1: ios=20809/0, merge=0/0, ticks=1222893/0, in_queue=1222893, util=98.97% 00:22:49.880 nvme9n1: ios=19363/0, merge=0/0, ticks=1213345/0, in_queue=1213345, util=99.22% 00:22:49.880 13:05:53 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:49.880 [global] 00:22:49.880 thread=1 00:22:49.880 invalidate=1 00:22:49.880 rw=randwrite 00:22:49.880 time_based=1 00:22:49.880 runtime=10 00:22:49.880 ioengine=libaio 00:22:49.880 direct=1 00:22:49.880 bs=262144 00:22:49.880 iodepth=64 00:22:49.880 norandommap=1 00:22:49.880 numjobs=1 00:22:49.880 00:22:49.880 [job0] 00:22:49.880 filename=/dev/nvme0n1 00:22:49.880 [job1] 00:22:49.880 filename=/dev/nvme10n1 00:22:49.880 [job2] 00:22:49.880 filename=/dev/nvme1n1 00:22:49.880 [job3] 00:22:49.880 filename=/dev/nvme2n1 00:22:49.880 [job4] 00:22:49.880 filename=/dev/nvme3n1 00:22:49.880 [job5] 00:22:49.880 filename=/dev/nvme4n1 00:22:49.880 [job6] 00:22:49.880 filename=/dev/nvme5n1 00:22:49.880 [job7] 00:22:49.880 filename=/dev/nvme6n1 00:22:49.880 [job8] 00:22:49.880 filename=/dev/nvme7n1 
00:22:49.880 [job9] 00:22:49.880 filename=/dev/nvme8n1 00:22:49.880 [job10] 00:22:49.880 filename=/dev/nvme9n1 00:22:49.880 Could not set queue depth (nvme0n1) 00:22:49.880 Could not set queue depth (nvme10n1) 00:22:49.880 Could not set queue depth (nvme1n1) 00:22:49.880 Could not set queue depth (nvme2n1) 00:22:49.880 Could not set queue depth (nvme3n1) 00:22:49.880 Could not set queue depth (nvme4n1) 00:22:49.880 Could not set queue depth (nvme5n1) 00:22:49.880 Could not set queue depth (nvme6n1) 00:22:49.880 Could not set queue depth (nvme7n1) 00:22:49.880 Could not set queue depth (nvme8n1) 00:22:49.880 Could not set queue depth (nvme9n1) 00:22:49.880 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:49.880 fio-3.35 00:22:49.880 Starting 11 threads 00:22:59.888 00:22:59.888 job0: (groupid=0, jobs=1): err= 0: pid=4048141: Fri Apr 26 13:06:04 2024 00:22:59.888 write: IOPS=733, BW=183MiB/s (192MB/s)(1852MiB/10099msec); 0 zone resets 00:22:59.888 slat (usec): min=21, max=36822, avg=1345.17, stdev=2432.90 00:22:59.888 clat (msec): min=10, max=202, avg=85.85, stdev=24.33 00:22:59.888 lat (msec): min=10, max=202, avg=87.19, stdev=24.61 00:22:59.888 clat percentiles (msec): 00:22:59.888 | 1.00th=[ 52], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 71], 00:22:59.888 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 77], 60.00th=[ 79], 00:22:59.888 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 127], 95.00th=[ 134], 00:22:59.888 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 190], 99.95th=[ 197], 00:22:59.888 | 99.99th=[ 203] 00:22:59.888 bw ( KiB/s): min=120832, max=269312, per=8.88%, avg=188032.00, stdev=46124.85, samples=20 00:22:59.888 iops : min= 472, max= 1052, avg=734.50, stdev=180.18, samples=20 00:22:59.888 lat (msec) : 20=0.11%, 50=0.58%, 100=69.61%, 250=29.70% 00:22:59.888 cpu : usr=1.64%, sys=2.46%, ctx=1879, majf=0, minf=1 00:22:59.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:59.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.888 issued rwts: total=0,7408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.888 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:22:59.888 job1: (groupid=0, jobs=1): err= 0: pid=4048143: Fri Apr 26 13:06:04 2024 00:22:59.888 write: IOPS=836, BW=209MiB/s (219MB/s)(2108MiB/10076msec); 0 zone resets 00:22:59.888 slat (usec): min=17, max=31236, avg=1169.04, stdev=2122.45 00:22:59.888 clat (msec): min=14, max=156, avg=75.26, stdev=19.36 00:22:59.888 lat (msec): min=14, max=156, avg=76.43, stdev=19.60 00:22:59.888 clat percentiles (msec): 00:22:59.888 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 49], 20.00th=[ 57], 00:22:59.888 | 30.00th=[ 61], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 82], 00:22:59.888 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 100], 95.00th=[ 102], 00:22:59.888 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 142], 99.95th=[ 150], 00:22:59.888 | 99.99th=[ 157] 00:22:59.888 bw ( KiB/s): min=158208, max=347648, per=10.12%, avg=214246.40, stdev=55639.13, samples=20 00:22:59.888 iops : min= 618, max= 1358, avg=836.90, stdev=217.34, samples=20 00:22:59.888 lat (msec) : 20=0.09%, 50=10.31%, 100=81.72%, 250=7.87% 00:22:59.888 cpu : usr=2.13%, sys=2.55%, ctx=2200, majf=0, minf=1 00:22:59.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:59.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.888 issued rwts: total=0,8432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.888 job2: (groupid=0, jobs=1): err= 0: pid=4048154: Fri Apr 26 13:06:04 2024 00:22:59.888 write: IOPS=607, BW=152MiB/s (159MB/s)(1534MiB/10093msec); 0 zone resets 00:22:59.888 slat (usec): min=27, max=16576, avg=1607.55, stdev=2802.99 00:22:59.888 clat (msec): min=11, max=189, avg=103.62, stdev=11.92 00:22:59.888 lat (msec): min=11, max=189, avg=105.23, stdev=11.77 00:22:59.888 clat percentiles (msec): 00:22:59.888 | 1.00th=[ 78], 5.00th=[ 86], 10.00th=[ 94], 20.00th=[ 97], 00:22:59.888 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 105], 00:22:59.888 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 117], 95.00th=[ 123], 00:22:59.888 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 178], 99.95th=[ 184], 00:22:59.888 | 99.99th=[ 190] 00:22:59.888 bw ( KiB/s): min=137216, max=183808, per=7.34%, avg=155468.80, stdev=10670.56, samples=20 00:22:59.888 iops : min= 536, max= 718, avg=607.30, stdev=41.68, samples=20 00:22:59.888 lat (msec) : 20=0.10%, 50=0.39%, 100=28.63%, 250=70.88% 00:22:59.888 cpu : usr=1.46%, sys=2.03%, ctx=1631, majf=0, minf=1 00:22:59.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:59.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.888 issued rwts: total=0,6136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.888 job3: (groupid=0, jobs=1): err= 0: pid=4048155: Fri Apr 26 13:06:04 2024 00:22:59.888 write: IOPS=1013, BW=253MiB/s (266MB/s)(2559MiB/10093msec); 0 zone resets 00:22:59.888 slat (usec): min=13, max=44939, avg=900.65, stdev=1831.38 00:22:59.888 clat (msec): min=4, max=190, avg=62.18, stdev=18.47 00:22:59.888 lat (msec): min=4, max=190, avg=63.08, stdev=18.71 00:22:59.888 clat percentiles (msec): 00:22:59.888 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 52], 00:22:59.888 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 62], 
00:22:59.889 | 70.00th=[ 63], 80.00th=[ 67], 90.00th=[ 95], 95.00th=[ 102], 00:22:59.889 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 171], 99.95th=[ 184], 00:22:59.889 | 99.99th=[ 190] 00:22:59.889 bw ( KiB/s): min=160256, max=322560, per=12.30%, avg=260402.85, stdev=52951.69, samples=20 00:22:59.889 iops : min= 626, max= 1260, avg=1017.15, stdev=206.85, samples=20 00:22:59.889 lat (msec) : 10=0.12%, 20=0.35%, 50=14.18%, 100=78.00%, 250=7.35% 00:22:59.889 cpu : usr=2.34%, sys=2.80%, ctx=3170, majf=0, minf=1 00:22:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.889 issued rwts: total=0,10234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.889 job4: (groupid=0, jobs=1): err= 0: pid=4048156: Fri Apr 26 13:06:04 2024 00:22:59.889 write: IOPS=762, BW=191MiB/s (200MB/s)(1921MiB/10073msec); 0 zone resets 00:22:59.889 slat (usec): min=17, max=20056, avg=1105.23, stdev=2268.92 00:22:59.889 clat (msec): min=2, max=159, avg=82.75, stdev=27.12 00:22:59.889 lat (msec): min=4, max=159, avg=83.85, stdev=27.49 00:22:59.889 clat percentiles (msec): 00:22:59.889 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 49], 20.00th=[ 70], 00:22:59.889 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 83], 00:22:59.889 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 123], 95.00th=[ 127], 00:22:59.889 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:22:59.889 | 99.99th=[ 159] 00:22:59.889 bw ( KiB/s): min=133120, max=320000, per=9.21%, avg=195112.80, stdev=44566.37, samples=20 00:22:59.889 iops : min= 520, max= 1250, avg=762.15, stdev=174.10, samples=20 00:22:59.889 lat (msec) : 4=0.01%, 10=0.57%, 20=1.97%, 50=7.98%, 100=61.39% 00:22:59.889 lat (msec) : 250=28.08% 00:22:59.889 cpu : usr=1.79%, sys=2.35%, ctx=3024, majf=0, minf=1 00:22:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.889 issued rwts: total=0,7684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.889 job5: (groupid=0, jobs=1): err= 0: pid=4048157: Fri Apr 26 13:06:04 2024 00:22:59.889 write: IOPS=692, BW=173MiB/s (182MB/s)(1749MiB/10097msec); 0 zone resets 00:22:59.889 slat (usec): min=19, max=106878, avg=1277.26, stdev=2924.87 00:22:59.889 clat (msec): min=2, max=212, avg=90.96, stdev=32.33 00:22:59.889 lat (msec): min=2, max=212, avg=92.24, stdev=32.78 00:22:59.889 clat percentiles (msec): 00:22:59.889 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 48], 20.00th=[ 58], 00:22:59.889 | 30.00th=[ 79], 40.00th=[ 97], 50.00th=[ 103], 60.00th=[ 105], 00:22:59.889 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 134], 00:22:59.889 | 99.00th=[ 150], 99.50th=[ 182], 99.90th=[ 205], 99.95th=[ 207], 00:22:59.889 | 99.99th=[ 213] 00:22:59.889 bw ( KiB/s): min=133120, max=324608, per=8.38%, avg=177499.50, stdev=56387.13, samples=20 00:22:59.889 iops : min= 520, max= 1268, avg=693.35, stdev=220.27, samples=20 00:22:59.889 lat (msec) : 4=0.10%, 10=1.00%, 20=2.06%, 50=7.40%, 100=34.12% 00:22:59.889 lat (msec) : 250=55.32% 00:22:59.889 cpu : usr=1.49%, sys=2.26%, ctx=2615, majf=0, minf=1 00:22:59.889 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.889 issued rwts: total=0,6996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.889 job6: (groupid=0, jobs=1): err= 0: pid=4048158: Fri Apr 26 13:06:04 2024 00:22:59.889 write: IOPS=603, BW=151MiB/s (158MB/s)(1523MiB/10092msec); 0 zone resets 00:22:59.889 slat (usec): min=24, max=21696, avg=1576.19, stdev=2829.73 00:22:59.889 clat (msec): min=13, max=190, avg=104.41, stdev=13.92 00:22:59.889 lat (msec): min=13, max=190, avg=105.98, stdev=13.84 00:22:59.889 clat percentiles (msec): 00:22:59.889 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 94], 20.00th=[ 97], 00:22:59.889 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 105], 00:22:59.889 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 122], 95.00th=[ 129], 00:22:59.889 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 186], 00:22:59.889 | 99.99th=[ 190] 00:22:59.889 bw ( KiB/s): min=129024, max=191488, per=7.29%, avg=154342.40, stdev=14102.62, samples=20 00:22:59.889 iops : min= 504, max= 748, avg=602.90, stdev=55.09, samples=20 00:22:59.889 lat (msec) : 20=0.13%, 50=0.33%, 100=28.45%, 250=71.09% 00:22:59.889 cpu : usr=1.52%, sys=1.90%, ctx=1743, majf=0, minf=1 00:22:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.889 issued rwts: total=0,6092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.889 job7: (groupid=0, jobs=1): err= 0: pid=4048159: Fri Apr 26 13:06:04 2024 00:22:59.889 write: IOPS=946, BW=237MiB/s (248MB/s)(2384MiB/10074msec); 0 zone resets 00:22:59.889 slat (usec): min=10, max=52122, avg=920.84, stdev=2008.56 00:22:59.889 clat (msec): min=2, max=157, avg=66.64, stdev=28.40 00:22:59.889 lat (msec): min=2, max=157, avg=67.56, stdev=28.86 00:22:59.889 clat percentiles (msec): 00:22:59.889 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 34], 20.00th=[ 45], 00:22:59.889 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 65], 00:22:59.889 | 70.00th=[ 82], 80.00th=[ 97], 90.00th=[ 102], 95.00th=[ 118], 00:22:59.889 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:22:59.889 | 99.99th=[ 159] 00:22:59.889 bw ( KiB/s): min=120832, max=381952, per=11.45%, avg=242560.95, stdev=74225.14, samples=20 00:22:59.889 iops : min= 472, max= 1492, avg=947.50, stdev=289.94, samples=20 00:22:59.889 lat (msec) : 4=0.06%, 10=1.00%, 20=2.87%, 50=19.10%, 100=65.74% 00:22:59.889 lat (msec) : 250=11.22% 00:22:59.889 cpu : usr=2.09%, sys=2.85%, ctx=3745, majf=0, minf=1 00:22:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.889 issued rwts: total=0,9537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.889 job8: (groupid=0, jobs=1): err= 0: pid=4048160: Fri Apr 26 13:06:04 2024 00:22:59.889 write: IOPS=642, BW=161MiB/s (168MB/s)(1621MiB/10093msec); 0 zone resets 00:22:59.889 
slat (usec): min=23, max=17436, avg=1469.29, stdev=2676.98 00:22:59.889 clat (msec): min=8, max=189, avg=98.15, stdev=17.79 00:22:59.889 lat (msec): min=8, max=189, avg=99.62, stdev=17.90 00:22:59.889 clat percentiles (msec): 00:22:59.889 | 1.00th=[ 34], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 90], 00:22:59.889 | 30.00th=[ 97], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:22:59.889 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 116], 95.00th=[ 121], 00:22:59.889 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 178], 99.95th=[ 184], 00:22:59.889 | 99.99th=[ 190] 00:22:59.889 bw ( KiB/s): min=137216, max=221696, per=7.76%, avg=164348.25, stdev=23200.52, samples=20 00:22:59.889 iops : min= 536, max= 866, avg=641.95, stdev=90.54, samples=20 00:22:59.889 lat (msec) : 10=0.02%, 20=0.45%, 50=1.23%, 100=34.87%, 250=63.44% 00:22:59.889 cpu : usr=1.36%, sys=2.07%, ctx=1908, majf=0, minf=1 00:22:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.889 issued rwts: total=0,6482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.889 job9: (groupid=0, jobs=1): err= 0: pid=4048161: Fri Apr 26 13:06:04 2024 00:22:59.889 write: IOPS=799, BW=200MiB/s (210MB/s)(2006MiB/10035msec); 0 zone resets 00:22:59.889 slat (usec): min=13, max=48466, avg=1101.10, stdev=2476.01 00:22:59.889 clat (usec): min=1120, max=147778, avg=78903.94, stdev=28151.08 00:22:59.889 lat (usec): min=1178, max=147828, avg=80005.04, stdev=28580.62 00:22:59.889 clat percentiles (msec): 00:22:59.889 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 32], 20.00th=[ 57], 00:22:59.889 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 95], 00:22:59.889 | 70.00th=[ 99], 80.00th=[ 101], 90.00th=[ 104], 95.00th=[ 111], 00:22:59.889 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 140], 99.95th=[ 144], 00:22:59.889 | 99.99th=[ 148] 00:22:59.889 bw ( KiB/s): min=138240, max=439808, per=9.62%, avg=203776.00, stdev=72996.04, samples=20 00:22:59.889 iops : min= 540, max= 1718, avg=796.00, stdev=285.14, samples=20 00:22:59.889 lat (msec) : 2=0.10%, 4=0.25%, 10=0.72%, 20=2.90%, 50=14.75% 00:22:59.889 lat (msec) : 100=59.92%, 250=21.36% 00:22:59.889 cpu : usr=1.68%, sys=2.43%, ctx=3053, majf=0, minf=1 00:22:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.889 issued rwts: total=0,8023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.889 job10: (groupid=0, jobs=1): err= 0: pid=4048162: Fri Apr 26 13:06:04 2024 00:22:59.889 write: IOPS=646, BW=162MiB/s (169MB/s)(1631MiB/10097msec); 0 zone resets 00:22:59.889 slat (usec): min=23, max=234579, avg=1492.21, stdev=3916.23 00:22:59.889 clat (msec): min=11, max=310, avg=97.48, stdev=27.48 00:22:59.889 lat (msec): min=11, max=310, avg=98.97, stdev=27.68 00:22:59.889 clat percentiles (msec): 00:22:59.889 | 1.00th=[ 31], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 79], 00:22:59.889 | 30.00th=[ 84], 40.00th=[ 93], 50.00th=[ 99], 60.00th=[ 101], 00:22:59.889 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 130], 95.00th=[ 136], 00:22:59.889 | 99.00th=[ 199], 99.50th=[ 268], 99.90th=[ 292], 99.95th=[ 
305], 00:22:59.889 | 99.99th=[ 313] 00:22:59.889 bw ( KiB/s): min=120832, max=218112, per=7.81%, avg=165427.20, stdev=29165.75, samples=20 00:22:59.889 iops : min= 472, max= 852, avg=646.20, stdev=113.93, samples=20 00:22:59.889 lat (msec) : 20=0.32%, 50=2.08%, 100=59.30%, 250=37.53%, 500=0.77% 00:22:59.889 cpu : usr=1.42%, sys=2.26%, ctx=1831, majf=0, minf=1 00:22:59.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:59.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.890 issued rwts: total=0,6525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.890 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.890 00:22:59.890 Run status group 0 (all jobs): 00:22:59.890 WRITE: bw=2068MiB/s (2169MB/s), 151MiB/s-253MiB/s (158MB/s-266MB/s), io=20.4GiB (21.9GB), run=10035-10099msec 00:22:59.890 00:22:59.890 Disk stats (read/write): 00:22:59.890 nvme0n1: ios=46/14810, merge=0/0, ticks=1550/1226537, in_queue=1228087, util=99.72% 00:22:59.890 nvme10n1: ios=44/16509, merge=0/0, ticks=1764/1197367, in_queue=1199131, util=99.91% 00:22:59.890 nvme1n1: ios=47/11952, merge=0/0, ticks=158/1196938, in_queue=1197096, util=97.87% 00:22:59.890 nvme2n1: ios=43/20146, merge=0/0, ticks=2026/1198262, in_queue=1200288, util=99.97% 00:22:59.890 nvme3n1: ios=45/15019, merge=0/0, ticks=1895/1205470, in_queue=1207365, util=100.00% 00:22:59.890 nvme4n1: ios=45/13990, merge=0/0, ticks=1817/1224705, in_queue=1226522, util=99.98% 00:22:59.890 nvme5n1: ios=0/11869, merge=0/0, ticks=0/1198641, in_queue=1198641, util=97.98% 00:22:59.890 nvme6n1: ios=43/18724, merge=0/0, ticks=1218/1201455, in_queue=1202673, util=100.00% 00:22:59.890 nvme7n1: ios=0/12642, merge=0/0, ticks=0/1199565, in_queue=1199565, util=98.65% 00:22:59.890 nvme8n1: ios=42/15293, merge=0/0, ticks=3384/1201017, in_queue=1204401, util=100.00% 00:22:59.890 nvme9n1: ios=41/13049, merge=0/0, ticks=1694/1204277, in_queue=1205971, util=100.00% 00:22:59.890 13:06:04 -- target/multiconnection.sh@36 -- # sync 00:22:59.890 13:06:04 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:59.890 13:06:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.890 13:06:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:59.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:59.890 13:06:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:59.890 13:06:04 -- common/autotest_common.sh@1205 -- # local i=0 00:22:59.890 13:06:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:59.890 13:06:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:59.890 13:06:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:59.890 13:06:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:22:59.890 13:06:04 -- common/autotest_common.sh@1217 -- # return 0 00:22:59.890 13:06:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:59.890 13:06:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.890 13:06:04 -- common/autotest_common.sh@10 -- # set +x 00:22:59.890 13:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.890 13:06:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.890 13:06:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:00.151 
NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:00.151 13:06:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:00.151 13:06:05 -- common/autotest_common.sh@1205 -- # local i=0 00:23:00.151 13:06:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:00.151 13:06:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:00.151 13:06:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:00.151 13:06:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:23:00.151 13:06:05 -- common/autotest_common.sh@1217 -- # return 0 00:23:00.151 13:06:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:00.151 13:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.151 13:06:05 -- common/autotest_common.sh@10 -- # set +x 00:23:00.151 13:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.151 13:06:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.151 13:06:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:00.412 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:00.412 13:06:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:00.412 13:06:05 -- common/autotest_common.sh@1205 -- # local i=0 00:23:00.412 13:06:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:00.412 13:06:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:00.412 13:06:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:00.412 13:06:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:23:00.412 13:06:05 -- common/autotest_common.sh@1217 -- # return 0 00:23:00.413 13:06:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:00.413 13:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.413 13:06:05 -- common/autotest_common.sh@10 -- # set +x 00:23:00.413 13:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.413 13:06:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.413 13:06:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:00.675 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:00.675 13:06:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:00.675 13:06:05 -- common/autotest_common.sh@1205 -- # local i=0 00:23:00.675 13:06:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:00.675 13:06:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:00.675 13:06:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:00.675 13:06:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:23:00.675 13:06:05 -- common/autotest_common.sh@1217 -- # return 0 00:23:00.675 13:06:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:00.675 13:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.675 13:06:05 -- common/autotest_common.sh@10 -- # set +x 00:23:00.675 13:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.675 13:06:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.675 13:06:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:01.247 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:01.247 13:06:06 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:01.247 13:06:06 -- common/autotest_common.sh@1205 -- # local i=0 00:23:01.247 13:06:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:01.247 13:06:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:01.247 13:06:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:01.247 13:06:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:23:01.247 13:06:06 -- common/autotest_common.sh@1217 -- # return 0 00:23:01.247 13:06:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:01.247 13:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.247 13:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:01.247 13:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.247 13:06:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.247 13:06:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:01.247 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:01.247 13:06:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:01.247 13:06:06 -- common/autotest_common.sh@1205 -- # local i=0 00:23:01.247 13:06:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:01.247 13:06:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:01.512 13:06:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:01.512 13:06:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:23:01.512 13:06:06 -- common/autotest_common.sh@1217 -- # return 0 00:23:01.512 13:06:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:01.512 13:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.512 13:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:01.512 13:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.512 13:06:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.512 13:06:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:01.512 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:01.512 13:06:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:01.512 13:06:06 -- common/autotest_common.sh@1205 -- # local i=0 00:23:01.512 13:06:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:01.512 13:06:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:01.512 13:06:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:01.512 13:06:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:23:01.512 13:06:06 -- common/autotest_common.sh@1217 -- # return 0 00:23:01.512 13:06:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:01.512 13:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.512 13:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:01.512 13:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.512 13:06:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.512 13:06:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:01.823 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:01.823 13:06:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:01.823 13:06:06 -- 
common/autotest_common.sh@1205 -- # local i=0 00:23:01.823 13:06:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:01.823 13:06:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:01.823 13:06:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:23:01.823 13:06:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:01.823 13:06:06 -- common/autotest_common.sh@1217 -- # return 0 00:23:01.823 13:06:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:01.823 13:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.823 13:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:01.823 13:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.823 13:06:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.823 13:06:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:01.823 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:01.823 13:06:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:01.823 13:06:06 -- common/autotest_common.sh@1205 -- # local i=0 00:23:02.106 13:06:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:02.106 13:06:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:02.106 13:06:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:02.106 13:06:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:23:02.106 13:06:06 -- common/autotest_common.sh@1217 -- # return 0 00:23:02.106 13:06:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:02.106 13:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.106 13:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:02.106 13:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.106 13:06:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.106 13:06:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:02.106 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:02.106 13:06:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:02.106 13:06:06 -- common/autotest_common.sh@1205 -- # local i=0 00:23:02.106 13:06:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:02.106 13:06:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:02.106 13:06:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:02.106 13:06:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:23:02.106 13:06:07 -- common/autotest_common.sh@1217 -- # return 0 00:23:02.106 13:06:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:02.106 13:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.106 13:06:07 -- common/autotest_common.sh@10 -- # set +x 00:23:02.106 13:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.106 13:06:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.106 13:06:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:02.106 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:02.106 13:06:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:02.106 13:06:07 -- common/autotest_common.sh@1205 -- # local i=0 00:23:02.106 13:06:07 -- 
common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:02.106 13:06:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:02.106 13:06:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:02.106 13:06:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:23:02.106 13:06:07 -- common/autotest_common.sh@1217 -- # return 0 00:23:02.106 13:06:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:02.106 13:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.106 13:06:07 -- common/autotest_common.sh@10 -- # set +x 00:23:02.106 13:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.106 13:06:07 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:02.106 13:06:07 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:02.106 13:06:07 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:02.106 13:06:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:02.106 13:06:07 -- nvmf/common.sh@117 -- # sync 00:23:02.106 13:06:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.106 13:06:07 -- nvmf/common.sh@120 -- # set +e 00:23:02.106 13:06:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.106 13:06:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.106 rmmod nvme_tcp 00:23:02.367 rmmod nvme_fabrics 00:23:02.367 rmmod nvme_keyring 00:23:02.367 13:06:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.367 13:06:07 -- nvmf/common.sh@124 -- # set -e 00:23:02.367 13:06:07 -- nvmf/common.sh@125 -- # return 0 00:23:02.367 13:06:07 -- nvmf/common.sh@478 -- # '[' -n 4037267 ']' 00:23:02.367 13:06:07 -- nvmf/common.sh@479 -- # killprocess 4037267 00:23:02.367 13:06:07 -- common/autotest_common.sh@936 -- # '[' -z 4037267 ']' 00:23:02.367 13:06:07 -- common/autotest_common.sh@940 -- # kill -0 4037267 00:23:02.367 13:06:07 -- common/autotest_common.sh@941 -- # uname 00:23:02.367 13:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:02.367 13:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4037267 00:23:02.367 13:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:02.367 13:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:02.367 13:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4037267' 00:23:02.367 killing process with pid 4037267 00:23:02.367 13:06:07 -- common/autotest_common.sh@955 -- # kill 4037267 00:23:02.367 13:06:07 -- common/autotest_common.sh@960 -- # wait 4037267 00:23:02.628 13:06:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:02.628 13:06:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:02.628 13:06:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:02.628 13:06:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.628 13:06:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.628 13:06:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.628 13:06:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.628 13:06:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.174 13:06:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.174 00:23:05.174 real 1m17.466s 00:23:05.174 user 4m50.250s 00:23:05.174 sys 0m23.968s 00:23:05.174 13:06:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:05.174 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:05.174 
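[editor note] The per-subsystem teardown traced above (nvme disconnect, wait for the serial to vanish, nvmf_delete_subsystem, repeated for cnode1..cnode11) reduces to the following minimal sketch. It is not the test script itself: the rpc.py path, socket, and sleep interval are assumptions; the NQN pattern, serial pattern, and subsystem count are taken from the log.

#!/usr/bin/env bash
# Minimal sketch of the multiconnection teardown loop (assumptions noted above).
set -euo pipefail

NVMF_SUBSYS=11
RPC=./scripts/rpc.py            # assumed location of SPDK's rpc.py, default /var/tmp/spdk.sock

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nqn="nqn.2016-06.io.spdk:cnode${i}"
    serial="SPDK${i}"

    # Detach the kernel initiator from this subsystem.
    nvme disconnect -n "$nqn"

    # Wait until no block device carrying this serial is visible any more.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        sleep 1
    done

    # Remove the subsystem on the target side.
    "$RPC" nvmf_delete_subsystem "$nqn"
done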
************************************ 00:23:05.174 END TEST nvmf_multiconnection 00:23:05.174 ************************************ 00:23:05.174 13:06:09 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:05.174 13:06:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:05.174 13:06:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.174 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:05.174 ************************************ 00:23:05.174 START TEST nvmf_initiator_timeout 00:23:05.174 ************************************ 00:23:05.174 13:06:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:05.174 * Looking for test storage... 00:23:05.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.174 13:06:09 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.174 13:06:09 -- nvmf/common.sh@7 -- # uname -s 00:23:05.174 13:06:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.174 13:06:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.174 13:06:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.174 13:06:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.174 13:06:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.174 13:06:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.174 13:06:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.174 13:06:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.174 13:06:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.174 13:06:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.174 13:06:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.174 13:06:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.174 13:06:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.174 13:06:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.174 13:06:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.174 13:06:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.174 13:06:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.174 13:06:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.174 13:06:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.174 13:06:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.174 13:06:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.174 13:06:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.174 13:06:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.174 13:06:09 -- paths/export.sh@5 -- # export PATH 00:23:05.175 13:06:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.175 13:06:09 -- nvmf/common.sh@47 -- # : 0 00:23:05.175 13:06:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.175 13:06:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.175 13:06:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.175 13:06:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.175 13:06:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.175 13:06:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.175 13:06:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.175 13:06:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.175 13:06:09 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.175 13:06:09 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.175 13:06:09 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:05.175 13:06:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:05.175 13:06:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.175 13:06:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:05.175 13:06:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:05.175 13:06:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:05.175 13:06:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.175 13:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.175 13:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.175 13:06:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:05.175 13:06:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:05.175 13:06:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.175 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:13.365 13:06:16 -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:23:13.365 13:06:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.365 13:06:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.365 13:06:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.366 13:06:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.366 13:06:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.366 13:06:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.366 13:06:16 -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.366 13:06:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.366 13:06:16 -- nvmf/common.sh@296 -- # e810=() 00:23:13.366 13:06:16 -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.366 13:06:16 -- nvmf/common.sh@297 -- # x722=() 00:23:13.366 13:06:16 -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.366 13:06:16 -- nvmf/common.sh@298 -- # mlx=() 00:23:13.366 13:06:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.366 13:06:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.366 13:06:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.366 13:06:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.366 13:06:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.366 13:06:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.366 13:06:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:13.366 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:13.366 13:06:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.366 13:06:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:13.366 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:13.366 13:06:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.366 13:06:16 -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.366 13:06:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.366 13:06:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.366 13:06:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:13.366 13:06:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.366 13:06:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:13.366 Found net devices under 0000:31:00.0: cvl_0_0 00:23:13.366 13:06:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.366 13:06:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.366 13:06:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.366 13:06:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:13.366 13:06:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.366 13:06:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:13.366 Found net devices under 0000:31:00.1: cvl_0_1 00:23:13.366 13:06:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.366 13:06:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:13.366 13:06:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:13.366 13:06:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:13.366 13:06:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:13.366 13:06:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.366 13:06:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.366 13:06:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.366 13:06:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.366 13:06:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.366 13:06:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.366 13:06:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.366 13:06:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.366 13:06:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.366 13:06:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.366 13:06:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.366 13:06:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.366 13:06:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.366 13:06:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.366 13:06:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.366 13:06:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.366 13:06:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.366 13:06:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.366 13:06:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.366 13:06:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:13.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:23:13.366 00:23:13.366 --- 10.0.0.2 ping statistics --- 00:23:13.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.366 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:23:13.366 13:06:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:23:13.366 00:23:13.366 --- 10.0.0.1 ping statistics --- 00:23:13.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.366 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:23:13.366 13:06:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.366 13:06:17 -- nvmf/common.sh@411 -- # return 0 00:23:13.366 13:06:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:13.366 13:06:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.366 13:06:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:13.366 13:06:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:13.366 13:06:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.366 13:06:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:13.366 13:06:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:13.366 13:06:17 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:13.366 13:06:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:13.366 13:06:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:13.366 13:06:17 -- common/autotest_common.sh@10 -- # set +x 00:23:13.366 13:06:17 -- nvmf/common.sh@470 -- # nvmfpid=4055415 00:23:13.366 13:06:17 -- nvmf/common.sh@471 -- # waitforlisten 4055415 00:23:13.366 13:06:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:13.366 13:06:17 -- common/autotest_common.sh@817 -- # '[' -z 4055415 ']' 00:23:13.366 13:06:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.366 13:06:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:13.366 13:06:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.366 13:06:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:13.366 13:06:17 -- common/autotest_common.sh@10 -- # set +x 00:23:13.366 [2024-04-26 13:06:17.378574] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:23:13.366 [2024-04-26 13:06:17.378636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.366 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.366 [2024-04-26 13:06:17.453631] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.367 [2024-04-26 13:06:17.527680] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.367 [2024-04-26 13:06:17.527725] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:13.367 [2024-04-26 13:06:17.527732] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.367 [2024-04-26 13:06:17.527738] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.367 [2024-04-26 13:06:17.527744] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.367 [2024-04-26 13:06:17.527897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.367 [2024-04-26 13:06:17.528099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.367 [2024-04-26 13:06:17.528101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.367 [2024-04-26 13:06:17.527947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.367 13:06:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:13.367 13:06:18 -- common/autotest_common.sh@850 -- # return 0 00:23:13.367 13:06:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:13.367 13:06:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:13.367 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.367 13:06:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:13.367 13:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.367 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.367 Malloc0 00:23:13.367 13:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:13.367 13:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.367 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.367 Delay0 00:23:13.367 13:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:13.367 13:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.367 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.367 [2024-04-26 13:06:18.239614] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.367 13:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:13.367 13:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.367 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.367 13:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:13.367 13:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.367 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.367 13:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.367 13:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:23:13.367 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.367 [2024-04-26 13:06:18.279891] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.367 13:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.367 13:06:18 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:15.276 13:06:19 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:15.276 13:06:19 -- common/autotest_common.sh@1184 -- # local i=0 00:23:15.276 13:06:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.276 13:06:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:15.276 13:06:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:17.197 13:06:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:17.197 13:06:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:17.197 13:06:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:23:17.197 13:06:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:17.197 13:06:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.197 13:06:21 -- common/autotest_common.sh@1194 -- # return 0 00:23:17.197 13:06:21 -- target/initiator_timeout.sh@35 -- # fio_pid=4056375 00:23:17.197 13:06:21 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:17.197 13:06:21 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:17.197 [global] 00:23:17.197 thread=1 00:23:17.197 invalidate=1 00:23:17.197 rw=write 00:23:17.197 time_based=1 00:23:17.197 runtime=60 00:23:17.197 ioengine=libaio 00:23:17.197 direct=1 00:23:17.197 bs=4096 00:23:17.197 iodepth=1 00:23:17.197 norandommap=0 00:23:17.197 numjobs=1 00:23:17.197 00:23:17.197 verify_dump=1 00:23:17.197 verify_backlog=512 00:23:17.197 verify_state_save=0 00:23:17.197 do_verify=1 00:23:17.197 verify=crc32c-intel 00:23:17.197 [job0] 00:23:17.197 filename=/dev/nvme0n1 00:23:17.197 Could not set queue depth (nvme0n1) 00:23:17.197 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:17.197 fio-3.35 00:23:17.197 Starting 1 thread 00:23:20.499 13:06:24 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:20.499 13:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.499 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 true 00:23:20.499 13:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.499 13:06:24 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:20.499 13:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.499 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 true 00:23:20.499 13:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.499 13:06:24 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:20.499 13:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.499 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 true 00:23:20.499 13:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.499 
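[editor note] The initiator_timeout test seen here amounts to exporting a Malloc bdev behind a delay bdev over NVMe/TCP, then raising the delay latencies while fio writes to the namespace. A condensed target-side sketch follows; the RPC wrapper and the uniform 31000000 spike value are assumptions, while the Malloc geometry, the 30 us baseline latencies, and all RPC names/arguments are copied from the log.

# Sketch of the target-side sequence used by initiator_timeout.sh (assumptions noted above).
RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"   # assumed wrapper; the target runs inside this netns

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While fio is running, push every latency far above the initiator timeout,
# then drop it back to the baseline so the job can complete.
for metric in avg_read avg_write p99_read p99_write; do
    $RPC bdev_delay_update_latency Delay0 "$metric" 31000000
done
sleep 3
for metric in avg_read avg_write p99_read p99_write; do
    $RPC bdev_delay_update_latency Delay0 "$metric" 30
done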
13:06:24 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:20.499 13:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.499 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 true 00:23:20.499 13:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.499 13:06:24 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:23.045 13:06:27 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:23.045 13:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.045 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:23:23.045 true 00:23:23.045 13:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.045 13:06:27 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:23.045 13:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.045 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:23:23.045 true 00:23:23.045 13:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.045 13:06:27 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:23.045 13:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.045 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:23:23.045 true 00:23:23.045 13:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.045 13:06:27 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:23.045 13:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.045 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:23:23.045 true 00:23:23.045 13:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.045 13:06:27 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:23.045 13:06:27 -- target/initiator_timeout.sh@54 -- # wait 4056375 00:24:19.340 00:24:19.340 job0: (groupid=0, jobs=1): err= 0: pid=4056620: Fri Apr 26 13:07:22 2024 00:24:19.340 read: IOPS=125, BW=501KiB/s (513kB/s)(29.3MiB/60001msec) 00:24:19.340 slat (usec): min=6, max=6854, avg=27.69, stdev=106.75 00:24:19.340 clat (usec): min=302, max=42037k, avg=7350.10, stdev=485026.14 00:24:19.340 lat (usec): min=310, max=42037k, avg=7377.78, stdev=485026.13 00:24:19.340 clat percentiles (usec): 00:24:19.340 | 1.00th=[ 635], 5.00th=[ 766], 10.00th=[ 824], 00:24:19.340 | 20.00th=[ 865], 30.00th=[ 898], 40.00th=[ 930], 00:24:19.340 | 50.00th=[ 971], 60.00th=[ 1004], 70.00th=[ 1020], 00:24:19.340 | 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:24:19.340 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:24:19.340 | 99.95th=[ 42206], 99.99th=[17112761] 00:24:19.340 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60001msec); 0 zone resets 00:24:19.340 slat (usec): min=9, max=30965, avg=35.44, stdev=353.08 00:24:19.340 clat (usec): min=190, max=921, avg=543.97, stdev=114.57 00:24:19.340 lat (usec): min=201, max=31886, avg=579.41, stdev=375.83 00:24:19.340 clat percentiles (usec): 00:24:19.340 | 1.00th=[ 249], 5.00th=[ 338], 10.00th=[ 375], 20.00th=[ 445], 00:24:19.340 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 578], 00:24:19.340 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 717], 00:24:19.340 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 840], 99.95th=[ 857], 00:24:19.340 | 99.99th=[ 922] 00:24:19.340 bw ( KiB/s): min= 144, max= 4096, per=100.00%, avg=2604.17, stdev=1471.22, samples=23 00:24:19.340 iops : min= 36, max= 
1024, avg=651.04, stdev=367.81, samples=23 00:24:19.340 lat (usec) : 250=0.55%, 500=16.03%, 750=34.99%, 1000=28.34% 00:24:19.340 lat (msec) : 2=19.10%, 50=0.97%, >=2000=0.01% 00:24:19.340 cpu : usr=0.41%, sys=0.76%, ctx=15201, majf=0, minf=1 00:24:19.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:19.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.340 issued rwts: total=7512,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:19.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:19.340 00:24:19.340 Run status group 0 (all jobs): 00:24:19.340 READ: bw=501KiB/s (513kB/s), 501KiB/s-501KiB/s (513kB/s-513kB/s), io=29.3MiB (30.8MB), run=60001-60001msec 00:24:19.340 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60001-60001msec 00:24:19.340 00:24:19.340 Disk stats (read/write): 00:24:19.340 nvme0n1: ios=7465/7680, merge=0/0, ticks=14431/3898, in_queue=18329, util=99.75% 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:19.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:19.340 13:07:22 -- common/autotest_common.sh@1205 -- # local i=0 00:24:19.340 13:07:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:19.340 13:07:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:19.340 13:07:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:19.340 13:07:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:19.340 13:07:22 -- common/autotest_common.sh@1217 -- # return 0 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:19.340 nvmf hotplug test: fio successful as expected 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:19.340 13:07:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.340 13:07:22 -- common/autotest_common.sh@10 -- # set +x 00:24:19.340 13:07:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:19.340 13:07:22 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:19.340 13:07:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:19.340 13:07:22 -- nvmf/common.sh@117 -- # sync 00:24:19.340 13:07:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.340 13:07:22 -- nvmf/common.sh@120 -- # set +e 00:24:19.340 13:07:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.340 13:07:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.340 rmmod nvme_tcp 00:24:19.340 rmmod nvme_fabrics 00:24:19.340 rmmod nvme_keyring 00:24:19.340 13:07:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.340 13:07:22 -- nvmf/common.sh@124 -- # set -e 00:24:19.340 13:07:22 -- nvmf/common.sh@125 -- # return 0 00:24:19.340 13:07:22 -- nvmf/common.sh@478 -- # '[' -n 4055415 ']' 00:24:19.340 13:07:22 -- nvmf/common.sh@479 -- # killprocess 4055415 00:24:19.340 13:07:22 -- 
common/autotest_common.sh@936 -- # '[' -z 4055415 ']' 00:24:19.340 13:07:22 -- common/autotest_common.sh@940 -- # kill -0 4055415 00:24:19.340 13:07:22 -- common/autotest_common.sh@941 -- # uname 00:24:19.340 13:07:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:19.340 13:07:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4055415 00:24:19.340 13:07:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:19.340 13:07:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:19.340 13:07:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4055415' 00:24:19.340 killing process with pid 4055415 00:24:19.340 13:07:22 -- common/autotest_common.sh@955 -- # kill 4055415 00:24:19.340 13:07:22 -- common/autotest_common.sh@960 -- # wait 4055415 00:24:19.340 13:07:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:19.340 13:07:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:19.340 13:07:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:19.340 13:07:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.340 13:07:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.340 13:07:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.340 13:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.340 13:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.911 13:07:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.911 00:24:19.911 real 1m15.067s 00:24:19.911 user 4m37.509s 00:24:19.911 sys 0m7.692s 00:24:19.911 13:07:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:19.911 13:07:24 -- common/autotest_common.sh@10 -- # set +x 00:24:19.911 ************************************ 00:24:19.911 END TEST nvmf_initiator_timeout 00:24:19.911 ************************************ 00:24:19.911 13:07:24 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:24:19.911 13:07:24 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:24:19.911 13:07:24 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:24:19.911 13:07:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.911 13:07:24 -- common/autotest_common.sh@10 -- # set +x 00:24:28.054 13:07:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:28.054 13:07:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.054 13:07:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.054 13:07:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.054 13:07:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.054 13:07:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.054 13:07:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.054 13:07:31 -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.054 13:07:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.054 13:07:31 -- nvmf/common.sh@296 -- # e810=() 00:24:28.054 13:07:31 -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.054 13:07:31 -- nvmf/common.sh@297 -- # x722=() 00:24:28.054 13:07:31 -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.054 13:07:31 -- nvmf/common.sh@298 -- # mlx=() 00:24:28.054 13:07:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.054 13:07:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.054 13:07:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.054 13:07:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.054 13:07:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.054 13:07:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.054 13:07:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:28.054 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:28.054 13:07:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.054 13:07:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:28.054 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:28.054 13:07:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.054 13:07:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.054 13:07:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.054 13:07:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.054 13:07:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:28.054 13:07:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.054 13:07:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:28.054 Found net devices under 0000:31:00.0: cvl_0_0 00:24:28.054 13:07:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.054 13:07:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.054 13:07:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.054 13:07:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:28.054 13:07:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.054 13:07:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:28.054 Found net devices under 0000:31:00.1: cvl_0_1 00:24:28.054 13:07:31 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.054 13:07:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:28.054 13:07:31 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.054 13:07:31 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:24:28.054 13:07:31 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:28.054 13:07:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:28.054 13:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:28.054 13:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:28.054 ************************************ 00:24:28.054 START TEST nvmf_perf_adq 00:24:28.054 ************************************ 00:24:28.054 13:07:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:28.054 * Looking for test storage... 00:24:28.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.054 13:07:32 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.054 13:07:32 -- nvmf/common.sh@7 -- # uname -s 00:24:28.054 13:07:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.054 13:07:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.054 13:07:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.054 13:07:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.054 13:07:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.054 13:07:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.054 13:07:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.054 13:07:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.054 13:07:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.055 13:07:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.055 13:07:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.055 13:07:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.055 13:07:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.055 13:07:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.055 13:07:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.055 13:07:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.055 13:07:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.055 13:07:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.055 13:07:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.055 13:07:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.055 13:07:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.055 13:07:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.055 13:07:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.055 13:07:32 -- paths/export.sh@5 -- # export PATH 00:24:28.055 13:07:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.055 13:07:32 -- nvmf/common.sh@47 -- # : 0 00:24:28.055 13:07:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.055 13:07:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.055 13:07:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.055 13:07:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.055 13:07:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.055 13:07:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.055 13:07:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.055 13:07:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.055 13:07:32 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:28.055 13:07:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.055 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 13:07:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:34.767 13:07:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.767 13:07:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.767 13:07:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.767 13:07:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.767 13:07:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.767 13:07:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.767 13:07:39 -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.767 13:07:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.767 13:07:39 -- nvmf/common.sh@296 -- # e810=() 00:24:34.767 13:07:39 -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.767 13:07:39 -- nvmf/common.sh@297 -- # x722=() 00:24:34.767 13:07:39 -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.767 13:07:39 -- nvmf/common.sh@298 -- # mlx=() 00:24:34.767 13:07:39 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:24:34.767 13:07:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.767 13:07:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.767 13:07:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.767 13:07:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.768 13:07:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.768 13:07:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.768 13:07:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.768 13:07:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.768 13:07:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:34.768 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:34.768 13:07:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.768 13:07:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:34.768 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:34.768 13:07:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.768 13:07:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.768 13:07:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.768 13:07:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.768 13:07:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:34.768 13:07:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.768 13:07:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:34.768 Found net devices under 0000:31:00.0: cvl_0_0 00:24:34.768 13:07:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.768 13:07:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.768 13:07:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
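[editor note] The device probing traced here (gather_supported_nvmf_pci_devs) selects the E810 ports by PCI vendor/device ID and then maps each PCI function to its netdev through sysfs. A stand-alone equivalent, hedged as a sketch and assuming the 0x8086:0x159b IDs and the sysfs layout shown in the log, is:

# Sketch: map supported test NICs (Intel 0x159b here) to their net devices via sysfs.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue
        echo "Found net devices under ${pci}: $(basename "$dev")"
    done
done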
00:24:34.768 13:07:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:34.768 13:07:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.768 13:07:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:34.768 Found net devices under 0000:31:00.1: cvl_0_1 00:24:34.768 13:07:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.768 13:07:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:34.768 13:07:39 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.768 13:07:39 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:34.768 13:07:39 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:34.768 13:07:39 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:34.768 13:07:39 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:35.710 13:07:40 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:37.622 13:07:42 -- target/perf_adq.sh@54 -- # sleep 5 00:24:42.909 13:07:47 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:42.909 13:07:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:42.909 13:07:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.909 13:07:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:42.909 13:07:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:42.909 13:07:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:42.909 13:07:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.909 13:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.909 13:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.909 13:07:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:42.909 13:07:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:42.909 13:07:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.909 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.909 13:07:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:42.909 13:07:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.909 13:07:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.909 13:07:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.909 13:07:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.909 13:07:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.909 13:07:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.909 13:07:47 -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.909 13:07:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.909 13:07:47 -- nvmf/common.sh@296 -- # e810=() 00:24:42.909 13:07:47 -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.909 13:07:47 -- nvmf/common.sh@297 -- # x722=() 00:24:42.909 13:07:47 -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.909 13:07:47 -- nvmf/common.sh@298 -- # mlx=() 00:24:42.909 13:07:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.909 13:07:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.909 13:07:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.909 13:07:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.910 13:07:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.910 13:07:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.910 13:07:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:42.910 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:42.910 13:07:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.910 13:07:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:42.910 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:42.910 13:07:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.910 13:07:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.910 13:07:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.910 13:07:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:42.910 13:07:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.910 13:07:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:42.910 Found net devices under 0000:31:00.0: cvl_0_0 00:24:42.910 13:07:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.910 13:07:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.910 13:07:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.910 13:07:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:42.910 13:07:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.910 13:07:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:42.910 Found net devices under 0000:31:00.1: cvl_0_1 00:24:42.910 13:07:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.910 13:07:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:42.910 13:07:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:42.910 13:07:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:42.910 13:07:47 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:42.910 13:07:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.910 13:07:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.910 13:07:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.910 13:07:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.910 13:07:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.910 13:07:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.910 13:07:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.910 13:07:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.910 13:07:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.910 13:07:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.910 13:07:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.910 13:07:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.910 13:07:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.910 13:07:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.910 13:07:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.910 13:07:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.910 13:07:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.910 13:07:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.910 13:07:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.910 13:07:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:24:42.910 00:24:42.910 --- 10.0.0.2 ping statistics --- 00:24:42.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.910 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:24:42.910 13:07:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:24:42.910 00:24:42.910 --- 10.0.0.1 ping statistics --- 00:24:42.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.910 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:24:42.910 13:07:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.910 13:07:47 -- nvmf/common.sh@411 -- # return 0 00:24:42.910 13:07:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:42.910 13:07:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.910 13:07:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:42.910 13:07:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.910 13:07:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:42.910 13:07:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:42.910 13:07:47 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:42.910 13:07:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:42.910 13:07:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:42.910 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.910 13:07:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:42.910 13:07:47 -- nvmf/common.sh@470 -- # nvmfpid=4077745 00:24:42.910 13:07:47 -- nvmf/common.sh@471 -- # waitforlisten 4077745 00:24:42.910 13:07:47 -- common/autotest_common.sh@817 -- # '[' -z 4077745 ']' 00:24:42.910 13:07:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.910 13:07:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:42.910 13:07:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.910 13:07:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:42.910 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.910 [2024-04-26 13:07:47.961786] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:24:42.910 [2024-04-26 13:07:47.961835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.171 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.171 [2024-04-26 13:07:48.026669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.171 [2024-04-26 13:07:48.092534] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.171 [2024-04-26 13:07:48.092573] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.171 [2024-04-26 13:07:48.092582] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.171 [2024-04-26 13:07:48.092589] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.171 [2024-04-26 13:07:48.092596] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
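The nvmf_tcp_init trace above reduces to a short iproute2/iptables sequence that builds the test network: the target-side E810 port is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, the NVMe/TCP port is opened, and reachability is verified with ping. A condensed sketch of those steps, assuming the two ports have already been renamed cvl_0_0 and cvl_0_1 as in this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side NIC lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # accept NVMe/TCP (port 4420) traffic arriving on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # host namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> host namespace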
00:24:43.171 [2024-04-26 13:07:48.092788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.171 [2024-04-26 13:07:48.092922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.171 [2024-04-26 13:07:48.093190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.171 [2024-04-26 13:07:48.093195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.741 13:07:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:43.741 13:07:48 -- common/autotest_common.sh@850 -- # return 0 00:24:43.741 13:07:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:43.741 13:07:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:43.741 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:43.741 13:07:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.741 13:07:48 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:43.742 13:07:48 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:43.742 13:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.742 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.015 13:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.015 13:07:48 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:44.015 13:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.015 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.015 13:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.015 13:07:48 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:44.015 13:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.015 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.015 [2024-04-26 13:07:48.899783] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.015 13:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.015 13:07:48 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:44.015 13:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.015 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.015 Malloc1 00:24:44.015 13:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.015 13:07:48 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.015 13:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.015 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.015 13:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.015 13:07:48 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:44.015 13:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.015 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.015 13:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.015 13:07:48 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.015 13:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.015 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.015 [2024-04-26 13:07:48.959218] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.015 13:07:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.015 13:07:48 -- target/perf_adq.sh@73 -- # perfpid=4077847 00:24:44.015 13:07:48 -- target/perf_adq.sh@74 -- # sleep 2 00:24:44.015 13:07:48 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:44.015 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.924 13:07:50 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:45.924 13:07:50 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:45.924 13:07:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.924 13:07:50 -- target/perf_adq.sh@76 -- # wc -l 00:24:45.924 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:24:46.184 13:07:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.184 13:07:51 -- target/perf_adq.sh@76 -- # count=4 00:24:46.184 13:07:51 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:46.184 13:07:51 -- target/perf_adq.sh@81 -- # wait 4077847 00:24:54.316 Initializing NVMe Controllers 00:24:54.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:54.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:54.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:54.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:54.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:54.316 Initialization complete. Launching workers. 00:24:54.316 ======================================================== 00:24:54.316 Latency(us) 00:24:54.316 Device Information : IOPS MiB/s Average min max 00:24:54.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10894.50 42.56 5886.75 1523.30 47077.52 00:24:54.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14270.50 55.74 4484.57 1341.89 9426.17 00:24:54.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14405.50 56.27 4443.56 1408.71 11243.64 00:24:54.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13092.20 51.14 4888.46 1142.97 10033.47 00:24:54.316 ======================================================== 00:24:54.316 Total : 52662.70 205.71 4863.84 1142.97 47077.52 00:24:54.316 00:24:54.316 13:07:59 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:54.316 13:07:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:54.316 13:07:59 -- nvmf/common.sh@117 -- # sync 00:24:54.316 13:07:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.316 13:07:59 -- nvmf/common.sh@120 -- # set +e 00:24:54.316 13:07:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.316 13:07:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.316 rmmod nvme_tcp 00:24:54.316 rmmod nvme_fabrics 00:24:54.316 rmmod nvme_keyring 00:24:54.316 13:07:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.316 13:07:59 -- nvmf/common.sh@124 -- # set -e 00:24:54.316 13:07:59 -- nvmf/common.sh@125 -- # return 0 00:24:54.316 13:07:59 -- nvmf/common.sh@478 -- # '[' -n 4077745 ']' 00:24:54.316 13:07:59 -- nvmf/common.sh@479 -- # killprocess 4077745 00:24:54.316 13:07:59 -- common/autotest_common.sh@936 -- # '[' -z 4077745 ']' 00:24:54.316 13:07:59 -- common/autotest_common.sh@940 
-- # kill -0 4077745 00:24:54.316 13:07:59 -- common/autotest_common.sh@941 -- # uname 00:24:54.316 13:07:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:54.316 13:07:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4077745 00:24:54.316 13:07:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:54.316 13:07:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:54.316 13:07:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4077745' 00:24:54.316 killing process with pid 4077745 00:24:54.316 13:07:59 -- common/autotest_common.sh@955 -- # kill 4077745 00:24:54.316 13:07:59 -- common/autotest_common.sh@960 -- # wait 4077745 00:24:54.576 13:07:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:54.576 13:07:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:54.576 13:07:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:54.576 13:07:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.576 13:07:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.576 13:07:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.576 13:07:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.576 13:07:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.489 13:08:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.489 13:08:01 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:56.489 13:08:01 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:57.873 13:08:02 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:00.427 13:08:04 -- target/perf_adq.sh@54 -- # sleep 5 00:25:05.729 13:08:09 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:05.729 13:08:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:05.729 13:08:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.729 13:08:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:05.729 13:08:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:05.729 13:08:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:05.729 13:08:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.729 13:08:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.729 13:08:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.729 13:08:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:05.729 13:08:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.729 13:08:09 -- common/autotest_common.sh@10 -- # set +x 00:25:05.729 13:08:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:05.729 13:08:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.729 13:08:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.729 13:08:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.729 13:08:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.729 13:08:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.729 13:08:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.729 13:08:09 -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.729 13:08:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.729 13:08:09 -- nvmf/common.sh@296 -- # e810=() 00:25:05.729 13:08:09 -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.729 13:08:09 -- nvmf/common.sh@297 -- # x722=() 00:25:05.729 13:08:09 -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.729 13:08:09 -- nvmf/common.sh@298 -- # mlx=() 00:25:05.729 
13:08:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.729 13:08:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.729 13:08:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.729 13:08:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.729 13:08:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.729 13:08:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.729 13:08:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:05.729 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:05.729 13:08:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.729 13:08:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:05.729 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:05.729 13:08:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.729 13:08:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.729 13:08:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.729 13:08:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:05.729 13:08:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.729 13:08:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:05.729 Found net devices under 0000:31:00.0: cvl_0_0 00:25:05.729 13:08:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.729 13:08:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.729 13:08:09 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.729 13:08:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:05.729 13:08:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.729 13:08:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:05.729 Found net devices under 0000:31:00.1: cvl_0_1 00:25:05.729 13:08:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.729 13:08:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:05.729 13:08:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:05.729 13:08:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:05.729 13:08:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:05.729 13:08:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.729 13:08:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.729 13:08:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.729 13:08:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.729 13:08:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.729 13:08:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.729 13:08:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.729 13:08:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.729 13:08:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.729 13:08:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.729 13:08:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.729 13:08:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.729 13:08:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.729 13:08:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.729 13:08:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.729 13:08:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.729 13:08:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.729 13:08:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.729 13:08:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.729 13:08:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:25:05.729 00:25:05.729 --- 10.0.0.2 ping statistics --- 00:25:05.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.729 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:25:05.729 13:08:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:25:05.729 00:25:05.729 --- 10.0.0.1 ping statistics --- 00:25:05.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.729 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:05.729 13:08:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.729 13:08:10 -- nvmf/common.sh@411 -- # return 0 00:25:05.729 13:08:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:05.729 13:08:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.729 13:08:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:05.729 13:08:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:05.729 13:08:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.729 13:08:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:05.729 13:08:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:05.729 13:08:10 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:05.729 13:08:10 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:05.729 13:08:10 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:05.730 13:08:10 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:05.730 net.core.busy_poll = 1 00:25:05.730 13:08:10 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:05.730 net.core.busy_read = 1 00:25:05.730 13:08:10 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:05.730 13:08:10 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:05.730 13:08:10 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:05.730 13:08:10 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:05.730 13:08:10 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:05.730 13:08:10 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:05.730 13:08:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:05.730 13:08:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:05.730 13:08:10 -- common/autotest_common.sh@10 -- # set +x 00:25:05.730 13:08:10 -- nvmf/common.sh@470 -- # nvmfpid=4082512 00:25:05.730 13:08:10 -- nvmf/common.sh@471 -- # waitforlisten 4082512 00:25:05.730 13:08:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:05.730 13:08:10 -- common/autotest_common.sh@817 -- # '[' -z 4082512 ']' 00:25:05.730 13:08:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.730 13:08:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.730 13:08:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
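adq_configure_driver above is the ADQ-specific part of the setup: hardware TC offload is enabled on the E810 port, busy polling is switched on, and an mqprio/flower pair creates a dedicated hardware traffic class for the NVMe/TCP listener. A standalone sketch of the same steps (the harness runs each command through ip netns exec cvl_0_0_ns_spdk; device name and the 10.0.0.2:4420 listener are taken from this run):

    DEV=cvl_0_0
    ethtool --offload "$DEV" hw-tc-offload on                 # let the ice driver offload tc filters
    ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1       # poll sockets instead of waiting on interrupts
    # two traffic classes in channel mode: TC0 (default) and TC1 (ADQ), two queues each
    tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$DEV" ingress
    # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, offloaded to hardware
    tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked right after these commands configures XPS so that each transmit queue follows its matching receive queue, keeping a connection's TX work on the core that polls its RX queue.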
00:25:05.730 13:08:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.730 13:08:10 -- common/autotest_common.sh@10 -- # set +x 00:25:05.730 [2024-04-26 13:08:10.667632] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:05.730 [2024-04-26 13:08:10.667698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.730 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.730 [2024-04-26 13:08:10.740066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.990 [2024-04-26 13:08:10.813634] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.990 [2024-04-26 13:08:10.813675] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.990 [2024-04-26 13:08:10.813683] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.990 [2024-04-26 13:08:10.813691] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.990 [2024-04-26 13:08:10.813698] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.990 [2024-04-26 13:08:10.813872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.990 [2024-04-26 13:08:10.813986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.990 [2024-04-26 13:08:10.814178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.990 [2024-04-26 13:08:10.814178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.560 13:08:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.560 13:08:11 -- common/autotest_common.sh@850 -- # return 0 00:25:06.560 13:08:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:06.560 13:08:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:06.560 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.560 13:08:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.560 13:08:11 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:06.560 13:08:11 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:06.560 13:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.560 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.560 13:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.560 13:08:11 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:06.560 13:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.560 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.560 13:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.560 13:08:11 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:06.560 13:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.560 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.560 [2024-04-26 13:08:11.576096] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.560 13:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.560 13:08:11 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
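adq_configure_nvmf_target drives the target entirely over JSON-RPC; gathered into one place, the sequence being issued around this point is roughly the following (a sketch using rpc.py directly instead of the harness's rpc_cmd wrapper; the script path relative to the SPDK checkout and the default /var/tmp/spdk.sock socket are assumptions):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    # placement-id 1 groups sockets by the receive queue that handles them, which is what ADQ relies on
    $RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    $RPC framework_start_init                          # target was started with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $RPC bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM-backed namespace, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420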
00:25:06.560 13:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.560 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.560 Malloc1 00:25:06.560 13:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.560 13:08:11 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.560 13:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.560 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.560 13:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.560 13:08:11 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.560 13:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.560 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.821 13:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.821 13:08:11 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.821 13:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.821 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:06.821 [2024-04-26 13:08:11.628564] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.821 13:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.821 13:08:11 -- target/perf_adq.sh@94 -- # perfpid=4082594 00:25:06.821 13:08:11 -- target/perf_adq.sh@95 -- # sleep 2 00:25:06.821 13:08:11 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:06.821 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.733 13:08:13 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:08.733 13:08:13 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:08.733 13:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.733 13:08:13 -- target/perf_adq.sh@97 -- # wc -l 00:25:08.733 13:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:08.733 13:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.733 13:08:13 -- target/perf_adq.sh@97 -- # count=2 00:25:08.733 13:08:13 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:08.733 13:08:13 -- target/perf_adq.sh@103 -- # wait 4082594 00:25:16.874 Initializing NVMe Controllers 00:25:16.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:16.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:16.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:16.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:16.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:16.874 Initialization complete. Launching workers. 
00:25:16.874 ======================================================== 00:25:16.874 Latency(us) 00:25:16.874 Device Information : IOPS MiB/s Average min max 00:25:16.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10585.70 41.35 6065.55 1315.37 50218.75 00:25:16.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11135.40 43.50 5764.85 1234.48 49461.24 00:25:16.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8716.00 34.05 7364.39 1372.86 50077.58 00:25:16.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9379.30 36.64 6824.79 910.63 50603.93 00:25:16.874 ======================================================== 00:25:16.874 Total : 39816.40 155.53 6444.62 910.63 50603.93 00:25:16.874 00:25:16.874 13:08:21 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:16.874 13:08:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:16.874 13:08:21 -- nvmf/common.sh@117 -- # sync 00:25:16.874 13:08:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.874 13:08:21 -- nvmf/common.sh@120 -- # set +e 00:25:16.874 13:08:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.874 13:08:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.874 rmmod nvme_tcp 00:25:16.874 rmmod nvme_fabrics 00:25:16.874 rmmod nvme_keyring 00:25:16.874 13:08:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.874 13:08:21 -- nvmf/common.sh@124 -- # set -e 00:25:16.874 13:08:21 -- nvmf/common.sh@125 -- # return 0 00:25:16.874 13:08:21 -- nvmf/common.sh@478 -- # '[' -n 4082512 ']' 00:25:16.874 13:08:21 -- nvmf/common.sh@479 -- # killprocess 4082512 00:25:16.874 13:08:21 -- common/autotest_common.sh@936 -- # '[' -z 4082512 ']' 00:25:16.874 13:08:21 -- common/autotest_common.sh@940 -- # kill -0 4082512 00:25:16.874 13:08:21 -- common/autotest_common.sh@941 -- # uname 00:25:16.874 13:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.874 13:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4082512 00:25:17.135 13:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:17.135 13:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:17.135 13:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4082512' 00:25:17.135 killing process with pid 4082512 00:25:17.135 13:08:21 -- common/autotest_common.sh@955 -- # kill 4082512 00:25:17.135 13:08:21 -- common/autotest_common.sh@960 -- # wait 4082512 00:25:17.135 13:08:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:17.135 13:08:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:17.135 13:08:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:17.135 13:08:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.135 13:08:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:17.135 13:08:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.135 13:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.135 13:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.518 13:08:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.518 13:08:25 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:20.518 00:25:20.518 real 0m53.256s 00:25:20.518 user 2m49.914s 00:25:20.518 sys 0m10.348s 00:25:20.518 13:08:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:20.518 13:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:20.518 
************************************ 00:25:20.518 END TEST nvmf_perf_adq 00:25:20.518 ************************************ 00:25:20.518 13:08:25 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:20.518 13:08:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:20.518 13:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.518 13:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:20.518 ************************************ 00:25:20.518 START TEST nvmf_shutdown 00:25:20.518 ************************************ 00:25:20.518 13:08:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:20.518 * Looking for test storage... 00:25:20.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.518 13:08:25 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.518 13:08:25 -- nvmf/common.sh@7 -- # uname -s 00:25:20.518 13:08:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.518 13:08:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.518 13:08:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.518 13:08:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.518 13:08:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.518 13:08:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.518 13:08:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.518 13:08:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.519 13:08:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.519 13:08:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.519 13:08:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:20.519 13:08:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:20.519 13:08:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.519 13:08:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.519 13:08:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.519 13:08:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.519 13:08:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.519 13:08:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.519 13:08:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.519 13:08:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.519 13:08:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.519 13:08:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.519 13:08:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.519 13:08:25 -- paths/export.sh@5 -- # export PATH 00:25:20.519 13:08:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.519 13:08:25 -- nvmf/common.sh@47 -- # : 0 00:25:20.519 13:08:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.519 13:08:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.519 13:08:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.519 13:08:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.519 13:08:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.519 13:08:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.519 13:08:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.519 13:08:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.519 13:08:25 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.519 13:08:25 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.519 13:08:25 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:20.519 13:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:20.519 13:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.519 13:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:20.780 ************************************ 00:25:20.780 START TEST nvmf_shutdown_tc1 00:25:20.780 ************************************ 00:25:20.780 13:08:25 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:25:20.780 13:08:25 -- target/shutdown.sh@74 -- # starttarget 00:25:20.780 13:08:25 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:20.780 13:08:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:20.780 13:08:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.780 13:08:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:20.780 13:08:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:20.780 13:08:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:20.780 
13:08:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.780 13:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.780 13:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.780 13:08:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:20.780 13:08:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:20.780 13:08:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.780 13:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:28.917 13:08:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:28.917 13:08:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.917 13:08:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.917 13:08:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.917 13:08:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.917 13:08:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.917 13:08:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.917 13:08:32 -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.917 13:08:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.917 13:08:32 -- nvmf/common.sh@296 -- # e810=() 00:25:28.917 13:08:32 -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.917 13:08:32 -- nvmf/common.sh@297 -- # x722=() 00:25:28.917 13:08:32 -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.917 13:08:32 -- nvmf/common.sh@298 -- # mlx=() 00:25:28.917 13:08:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.917 13:08:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.917 13:08:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.917 13:08:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.918 13:08:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.918 13:08:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.918 13:08:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.918 13:08:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.918 13:08:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:28.918 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:28.918 13:08:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:25:28.918 13:08:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:28.918 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:28.918 13:08:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.918 13:08:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.918 13:08:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.918 13:08:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:28.918 13:08:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.918 13:08:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:28.918 Found net devices under 0000:31:00.0: cvl_0_0 00:25:28.918 13:08:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.918 13:08:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.918 13:08:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.918 13:08:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:28.918 13:08:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.918 13:08:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:28.918 Found net devices under 0000:31:00.1: cvl_0_1 00:25:28.918 13:08:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.918 13:08:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:28.918 13:08:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:28.918 13:08:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:28.918 13:08:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.918 13:08:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.918 13:08:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.918 13:08:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.918 13:08:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.918 13:08:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.918 13:08:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.918 13:08:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.918 13:08:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.918 13:08:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.918 13:08:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.918 13:08:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.918 13:08:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.918 13:08:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.918 13:08:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.918 13:08:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.918 13:08:32 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.918 13:08:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.918 13:08:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.918 13:08:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:25:28.918 00:25:28.918 --- 10.0.0.2 ping statistics --- 00:25:28.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.918 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:25:28.918 13:08:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:25:28.918 00:25:28.918 --- 10.0.0.1 ping statistics --- 00:25:28.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.918 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:25:28.918 13:08:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.918 13:08:32 -- nvmf/common.sh@411 -- # return 0 00:25:28.918 13:08:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:28.918 13:08:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.918 13:08:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:28.918 13:08:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.918 13:08:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:28.918 13:08:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:28.918 13:08:32 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:28.918 13:08:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:28.918 13:08:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:28.918 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:25:28.918 13:08:32 -- nvmf/common.sh@470 -- # nvmfpid=4089132 00:25:28.918 13:08:32 -- nvmf/common.sh@471 -- # waitforlisten 4089132 00:25:28.918 13:08:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:28.918 13:08:32 -- common/autotest_common.sh@817 -- # '[' -z 4089132 ']' 00:25:28.918 13:08:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.918 13:08:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:28.918 13:08:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.918 13:08:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:28.918 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:25:28.918 [2024-04-26 13:08:32.907859] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
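nvmfappstart here launches the shutdown-test target inside the test namespace with core mask 0x1E (cores 1-4) and then waits for its RPC socket before any configuration is sent. Stripped of the harness plumbing, the equivalent by hand is roughly the sketch below (paths relative to the SPDK checkout are assumed; the real waitforlisten helper also bounds its retries and checks that the PID is still alive):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    NVMF_PID=$!
    # poll the default RPC socket until the target is ready to accept configuration
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done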
00:25:28.918 [2024-04-26 13:08:32.907923] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.918 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.918 [2024-04-26 13:08:32.980450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.918 [2024-04-26 13:08:33.074267] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.918 [2024-04-26 13:08:33.074320] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.918 [2024-04-26 13:08:33.074328] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.918 [2024-04-26 13:08:33.074335] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.918 [2024-04-26 13:08:33.074341] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.918 [2024-04-26 13:08:33.074491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.918 [2024-04-26 13:08:33.074656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.918 [2024-04-26 13:08:33.074786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.918 [2024-04-26 13:08:33.074787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:28.918 13:08:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:28.918 13:08:33 -- common/autotest_common.sh@850 -- # return 0 00:25:28.918 13:08:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:28.918 13:08:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:28.918 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:25:28.919 13:08:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.919 13:08:33 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.919 13:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.919 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:25:28.919 [2024-04-26 13:08:33.743415] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.919 13:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.919 13:08:33 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:28.919 13:08:33 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:28.919 13:08:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:28.919 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:25:28.919 13:08:33 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 
-- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.919 13:08:33 -- target/shutdown.sh@28 -- # cat 00:25:28.919 13:08:33 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:28.919 13:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.919 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:25:28.919 Malloc1 00:25:28.919 [2024-04-26 13:08:33.844116] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.919 Malloc2 00:25:28.919 Malloc3 00:25:28.919 Malloc4 00:25:29.179 Malloc5 00:25:29.179 Malloc6 00:25:29.179 Malloc7 00:25:29.179 Malloc8 00:25:29.179 Malloc9 00:25:29.179 Malloc10 00:25:29.179 13:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.179 13:08:34 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:29.179 13:08:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:29.179 13:08:34 -- common/autotest_common.sh@10 -- # set +x 00:25:29.439 13:08:34 -- target/shutdown.sh@78 -- # perfpid=4089517 00:25:29.439 13:08:34 -- target/shutdown.sh@79 -- # waitforlisten 4089517 /var/tmp/bdevperf.sock 00:25:29.439 13:08:34 -- common/autotest_common.sh@817 -- # '[' -z 4089517 ']' 00:25:29.439 13:08:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:29.439 13:08:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:29.439 13:08:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:29.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
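The Malloc1 through Malloc10 names and the single "Listening on 10.0.0.2 port 4420" notice above are the visible result of the batched rpcs.txt that shutdown.sh assembles in its create_subsystems loop. The batch itself is not echoed into this log, so the following is only a hedged reconstruction of an equivalent per-subsystem setup using SPDK's rpc.py (the malloc size and block-size arguments are illustrative placeholders, not taken from the log):

for i in $(seq 1 10); do
    # one malloc bdev per subsystem (64 MiB / 512-byte blocks chosen arbitrarily here)
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done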
00:25:29.439 13:08:34 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:29.439 13:08:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:29.439 13:08:34 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:29.439 13:08:34 -- common/autotest_common.sh@10 -- # set +x 00:25:29.439 13:08:34 -- nvmf/common.sh@521 -- # config=() 00:25:29.439 13:08:34 -- nvmf/common.sh@521 -- # local subsystem config 00:25:29.439 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.439 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.439 { 00:25:29.439 "params": { 00:25:29.439 "name": "Nvme$subsystem", 00:25:29.439 "trtype": "$TEST_TRANSPORT", 00:25:29.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.439 "adrfam": "ipv4", 00:25:29.439 "trsvcid": "$NVMF_PORT", 00:25:29.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.439 "hdgst": ${hdgst:-false}, 00:25:29.439 "ddgst": ${ddgst:-false} 00:25:29.439 }, 00:25:29.439 "method": "bdev_nvme_attach_controller" 00:25:29.439 } 00:25:29.439 EOF 00:25:29.439 )") 00:25:29.439 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.439 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.439 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.439 { 00:25:29.439 "params": { 00:25:29.439 "name": "Nvme$subsystem", 00:25:29.439 "trtype": "$TEST_TRANSPORT", 00:25:29.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.439 "adrfam": "ipv4", 00:25:29.439 "trsvcid": "$NVMF_PORT", 00:25:29.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.439 "hdgst": ${hdgst:-false}, 00:25:29.439 "ddgst": ${ddgst:-false} 00:25:29.439 }, 00:25:29.439 "method": "bdev_nvme_attach_controller" 00:25:29.439 } 00:25:29.439 EOF 00:25:29.439 )") 00:25:29.439 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.439 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.439 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.439 { 00:25:29.439 "params": { 00:25:29.439 "name": "Nvme$subsystem", 00:25:29.439 "trtype": "$TEST_TRANSPORT", 00:25:29.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.439 "adrfam": "ipv4", 00:25:29.439 "trsvcid": "$NVMF_PORT", 00:25:29.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.440 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.440 { 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme$subsystem", 00:25:29.440 "trtype": "$TEST_TRANSPORT", 00:25:29.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "$NVMF_PORT", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- 
nvmf/common.sh@543 -- # cat 00:25:29.440 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.440 { 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme$subsystem", 00:25:29.440 "trtype": "$TEST_TRANSPORT", 00:25:29.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "$NVMF_PORT", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.440 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.440 { 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme$subsystem", 00:25:29.440 "trtype": "$TEST_TRANSPORT", 00:25:29.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "$NVMF_PORT", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.440 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.440 { 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme$subsystem", 00:25:29.440 "trtype": "$TEST_TRANSPORT", 00:25:29.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "$NVMF_PORT", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.440 [2024-04-26 13:08:34.295995] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:29.440 [2024-04-26 13:08:34.296061] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:29.440 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.440 { 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme$subsystem", 00:25:29.440 "trtype": "$TEST_TRANSPORT", 00:25:29.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "$NVMF_PORT", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.440 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.440 { 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme$subsystem", 00:25:29.440 "trtype": "$TEST_TRANSPORT", 00:25:29.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "$NVMF_PORT", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.440 13:08:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.440 { 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme$subsystem", 00:25:29.440 "trtype": "$TEST_TRANSPORT", 00:25:29.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "$NVMF_PORT", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.440 "hdgst": ${hdgst:-false}, 00:25:29.440 "ddgst": ${ddgst:-false} 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 } 00:25:29.440 EOF 00:25:29.440 )") 00:25:29.440 13:08:34 -- nvmf/common.sh@543 -- # cat 00:25:29.440 13:08:34 -- nvmf/common.sh@545 -- # jq . 
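Stripped of the xtrace noise, the gen_nvmf_target_json fragments above follow one simple bash pattern: accumulate one heredoc-generated attach-controller object per subsystem into an array, then join the array with commas (the IFS=, and printf pair traced just below) so bdevperf can read the whole configuration from a file descriptor. A minimal sketch of that pattern, using the literal values this run substitutes for the $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT placeholders:

config=()
for subsystem in 1 2 3; do   # the test iterates over subsystems {1..10}
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joined objects, as printed in the trace that follows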
00:25:29.440 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.440 13:08:34 -- nvmf/common.sh@546 -- # IFS=, 00:25:29.440 13:08:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme1", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "4420", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.440 "hdgst": false, 00:25:29.440 "ddgst": false 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 },{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme2", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "4420", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:29.440 "hdgst": false, 00:25:29.440 "ddgst": false 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 },{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme3", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "4420", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:29.440 "hdgst": false, 00:25:29.440 "ddgst": false 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 },{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme4", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "4420", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:29.440 "hdgst": false, 00:25:29.440 "ddgst": false 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 },{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme5", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "4420", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:29.440 "hdgst": false, 00:25:29.440 "ddgst": false 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 },{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme6", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "4420", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:29.440 "hdgst": false, 00:25:29.440 "ddgst": false 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 },{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme7", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.440 "trsvcid": "4420", 00:25:29.440 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:29.440 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:29.440 "hdgst": false, 00:25:29.440 "ddgst": false 00:25:29.440 }, 00:25:29.440 "method": "bdev_nvme_attach_controller" 00:25:29.440 },{ 00:25:29.440 "params": { 00:25:29.440 "name": "Nvme8", 00:25:29.440 "trtype": "tcp", 00:25:29.440 "traddr": "10.0.0.2", 00:25:29.440 "adrfam": "ipv4", 00:25:29.441 "trsvcid": "4420", 00:25:29.441 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:29.441 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:29.441 "hdgst": false, 00:25:29.441 "ddgst": false 
00:25:29.441 }, 00:25:29.441 "method": "bdev_nvme_attach_controller" 00:25:29.441 },{ 00:25:29.441 "params": { 00:25:29.441 "name": "Nvme9", 00:25:29.441 "trtype": "tcp", 00:25:29.441 "traddr": "10.0.0.2", 00:25:29.441 "adrfam": "ipv4", 00:25:29.441 "trsvcid": "4420", 00:25:29.441 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:29.441 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:29.441 "hdgst": false, 00:25:29.441 "ddgst": false 00:25:29.441 }, 00:25:29.441 "method": "bdev_nvme_attach_controller" 00:25:29.441 },{ 00:25:29.441 "params": { 00:25:29.441 "name": "Nvme10", 00:25:29.441 "trtype": "tcp", 00:25:29.441 "traddr": "10.0.0.2", 00:25:29.441 "adrfam": "ipv4", 00:25:29.441 "trsvcid": "4420", 00:25:29.441 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:29.441 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:29.441 "hdgst": false, 00:25:29.441 "ddgst": false 00:25:29.441 }, 00:25:29.441 "method": "bdev_nvme_attach_controller" 00:25:29.441 }' 00:25:29.441 [2024-04-26 13:08:34.358460] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.441 [2024-04-26 13:08:34.421221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.822 13:08:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:30.822 13:08:35 -- common/autotest_common.sh@850 -- # return 0 00:25:30.822 13:08:35 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:30.822 13:08:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.822 13:08:35 -- common/autotest_common.sh@10 -- # set +x 00:25:30.822 13:08:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.822 13:08:35 -- target/shutdown.sh@83 -- # kill -9 4089517 00:25:30.822 13:08:35 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:30.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4089517 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:30.822 13:08:35 -- target/shutdown.sh@87 -- # sleep 1 00:25:31.763 13:08:36 -- target/shutdown.sh@88 -- # kill -0 4089132 00:25:31.763 13:08:36 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:31.763 13:08:36 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:31.763 13:08:36 -- nvmf/common.sh@521 -- # config=() 00:25:31.763 13:08:36 -- nvmf/common.sh@521 -- # local subsystem config 00:25:31.763 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:31.763 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:31.763 { 00:25:31.763 "params": { 00:25:31.763 "name": "Nvme$subsystem", 00:25:31.763 "trtype": "$TEST_TRANSPORT", 00:25:31.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.763 "adrfam": "ipv4", 00:25:31.763 "trsvcid": "$NVMF_PORT", 00:25:31.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.763 "hdgst": ${hdgst:-false}, 00:25:31.763 "ddgst": ${ddgst:-false} 00:25:31.763 }, 00:25:31.763 "method": "bdev_nvme_attach_controller" 00:25:31.763 } 00:25:31.763 EOF 00:25:31.763 )") 00:25:31.763 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:31.763 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:31.763 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:31.763 { 00:25:31.763 "params": { 00:25:31.763 "name": "Nvme$subsystem", 00:25:31.763 "trtype": 
"$TEST_TRANSPORT", 00:25:31.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.763 "adrfam": "ipv4", 00:25:31.763 "trsvcid": "$NVMF_PORT", 00:25:31.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.763 "hdgst": ${hdgst:-false}, 00:25:31.763 "ddgst": ${ddgst:-false} 00:25:31.763 }, 00:25:31.763 "method": "bdev_nvme_attach_controller" 00:25:31.763 } 00:25:31.763 EOF 00:25:31.763 )") 00:25:31.763 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.023 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.023 { 00:25:32.023 "params": { 00:25:32.023 "name": "Nvme$subsystem", 00:25:32.023 "trtype": "$TEST_TRANSPORT", 00:25:32.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.023 "adrfam": "ipv4", 00:25:32.023 "trsvcid": "$NVMF_PORT", 00:25:32.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.023 "hdgst": ${hdgst:-false}, 00:25:32.023 "ddgst": ${ddgst:-false} 00:25:32.023 }, 00:25:32.023 "method": "bdev_nvme_attach_controller" 00:25:32.023 } 00:25:32.023 EOF 00:25:32.023 )") 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.023 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.023 { 00:25:32.023 "params": { 00:25:32.023 "name": "Nvme$subsystem", 00:25:32.023 "trtype": "$TEST_TRANSPORT", 00:25:32.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.023 "adrfam": "ipv4", 00:25:32.023 "trsvcid": "$NVMF_PORT", 00:25:32.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.023 "hdgst": ${hdgst:-false}, 00:25:32.023 "ddgst": ${ddgst:-false} 00:25:32.023 }, 00:25:32.023 "method": "bdev_nvme_attach_controller" 00:25:32.023 } 00:25:32.023 EOF 00:25:32.023 )") 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.023 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.023 { 00:25:32.023 "params": { 00:25:32.023 "name": "Nvme$subsystem", 00:25:32.023 "trtype": "$TEST_TRANSPORT", 00:25:32.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.023 "adrfam": "ipv4", 00:25:32.023 "trsvcid": "$NVMF_PORT", 00:25:32.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.023 "hdgst": ${hdgst:-false}, 00:25:32.023 "ddgst": ${ddgst:-false} 00:25:32.023 }, 00:25:32.023 "method": "bdev_nvme_attach_controller" 00:25:32.023 } 00:25:32.023 EOF 00:25:32.023 )") 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.023 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.023 { 00:25:32.023 "params": { 00:25:32.023 "name": "Nvme$subsystem", 00:25:32.023 "trtype": "$TEST_TRANSPORT", 00:25:32.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.023 "adrfam": "ipv4", 00:25:32.023 "trsvcid": "$NVMF_PORT", 00:25:32.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.023 "hdgst": ${hdgst:-false}, 00:25:32.023 "ddgst": ${ddgst:-false} 00:25:32.023 }, 00:25:32.023 "method": "bdev_nvme_attach_controller" 00:25:32.023 } 00:25:32.023 EOF 00:25:32.023 )") 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.023 
13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.023 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.023 { 00:25:32.023 "params": { 00:25:32.023 "name": "Nvme$subsystem", 00:25:32.023 "trtype": "$TEST_TRANSPORT", 00:25:32.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.023 "adrfam": "ipv4", 00:25:32.023 "trsvcid": "$NVMF_PORT", 00:25:32.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.023 "hdgst": ${hdgst:-false}, 00:25:32.023 "ddgst": ${ddgst:-false} 00:25:32.023 }, 00:25:32.023 "method": "bdev_nvme_attach_controller" 00:25:32.023 } 00:25:32.023 EOF 00:25:32.023 )") 00:25:32.024 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.024 [2024-04-26 13:08:36.862997] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:32.024 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.024 [2024-04-26 13:08:36.863063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090002 ] 00:25:32.024 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.024 { 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme$subsystem", 00:25:32.024 "trtype": "$TEST_TRANSPORT", 00:25:32.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "$NVMF_PORT", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.024 "hdgst": ${hdgst:-false}, 00:25:32.024 "ddgst": ${ddgst:-false} 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 } 00:25:32.024 EOF 00:25:32.024 )") 00:25:32.024 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.024 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.024 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.024 { 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme$subsystem", 00:25:32.024 "trtype": "$TEST_TRANSPORT", 00:25:32.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "$NVMF_PORT", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.024 "hdgst": ${hdgst:-false}, 00:25:32.024 "ddgst": ${ddgst:-false} 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 } 00:25:32.024 EOF 00:25:32.024 )") 00:25:32.024 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.024 13:08:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:32.024 13:08:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:32.024 { 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme$subsystem", 00:25:32.024 "trtype": "$TEST_TRANSPORT", 00:25:32.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "$NVMF_PORT", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.024 "hdgst": ${hdgst:-false}, 00:25:32.024 "ddgst": ${ddgst:-false} 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 } 00:25:32.024 EOF 00:25:32.024 )") 00:25:32.024 13:08:36 -- nvmf/common.sh@543 -- # cat 00:25:32.024 13:08:36 -- nvmf/common.sh@545 -- # jq . 
00:25:32.024 13:08:36 -- nvmf/common.sh@546 -- # IFS=, 00:25:32.024 13:08:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme1", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme2", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme3", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme4", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme5", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme6", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme7", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme8", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": 
"bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme9", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 },{ 00:25:32.024 "params": { 00:25:32.024 "name": "Nvme10", 00:25:32.024 "trtype": "tcp", 00:25:32.024 "traddr": "10.0.0.2", 00:25:32.024 "adrfam": "ipv4", 00:25:32.024 "trsvcid": "4420", 00:25:32.024 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:32.024 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:32.024 "hdgst": false, 00:25:32.024 "ddgst": false 00:25:32.024 }, 00:25:32.024 "method": "bdev_nvme_attach_controller" 00:25:32.024 }' 00:25:32.024 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.024 [2024-04-26 13:08:36.923747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.024 [2024-04-26 13:08:36.986891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.408 Running I/O for 1 seconds... 00:25:34.791 00:25:34.791 Latency(us) 00:25:34.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.791 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme1n1 : 1.14 224.26 14.02 0.00 0.00 281157.12 15728.64 251658.24 00:25:34.791 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme2n1 : 1.13 227.32 14.21 0.00 0.00 273351.68 17694.72 246415.36 00:25:34.791 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme3n1 : 1.12 229.07 14.32 0.00 0.00 266919.68 17257.81 288358.40 00:25:34.791 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme4n1 : 1.13 226.42 14.15 0.00 0.00 265350.19 16711.68 253405.87 00:25:34.791 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme5n1 : 1.13 225.99 14.12 0.00 0.00 260989.23 15291.73 255153.49 00:25:34.791 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme6n1 : 1.18 271.99 17.00 0.00 0.00 213444.10 14964.05 244667.73 00:25:34.791 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme7n1 : 1.17 276.34 17.27 0.00 0.00 205415.75 4560.21 251658.24 00:25:34.791 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme8n1 : 1.22 265.73 16.61 0.00 0.00 204245.30 17039.36 244667.73 00:25:34.791 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 Nvme9n1 : 1.17 218.26 13.64 0.00 0.00 251563.31 22063.79 265639.25 00:25:34.791 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.791 Verification LBA range: start 0x0 length 0x400 00:25:34.791 
Nvme10n1 : 1.19 269.31 16.83 0.00 0.00 200568.05 1242.45 258648.75 00:25:34.791 =================================================================================================================== 00:25:34.791 Total : 2434.70 152.17 0.00 0.00 238882.21 1242.45 288358.40 00:25:34.791 13:08:39 -- target/shutdown.sh@94 -- # stoptarget 00:25:34.791 13:08:39 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:34.791 13:08:39 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:34.791 13:08:39 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:34.791 13:08:39 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:34.791 13:08:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:34.791 13:08:39 -- nvmf/common.sh@117 -- # sync 00:25:34.791 13:08:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:34.791 13:08:39 -- nvmf/common.sh@120 -- # set +e 00:25:34.791 13:08:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:34.791 13:08:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:34.791 rmmod nvme_tcp 00:25:34.791 rmmod nvme_fabrics 00:25:34.791 rmmod nvme_keyring 00:25:34.791 13:08:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:34.791 13:08:39 -- nvmf/common.sh@124 -- # set -e 00:25:34.791 13:08:39 -- nvmf/common.sh@125 -- # return 0 00:25:34.792 13:08:39 -- nvmf/common.sh@478 -- # '[' -n 4089132 ']' 00:25:34.792 13:08:39 -- nvmf/common.sh@479 -- # killprocess 4089132 00:25:34.792 13:08:39 -- common/autotest_common.sh@936 -- # '[' -z 4089132 ']' 00:25:34.792 13:08:39 -- common/autotest_common.sh@940 -- # kill -0 4089132 00:25:34.792 13:08:39 -- common/autotest_common.sh@941 -- # uname 00:25:34.792 13:08:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:34.792 13:08:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4089132 00:25:34.792 13:08:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:34.792 13:08:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:34.792 13:08:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4089132' 00:25:34.792 killing process with pid 4089132 00:25:34.792 13:08:39 -- common/autotest_common.sh@955 -- # kill 4089132 00:25:34.792 13:08:39 -- common/autotest_common.sh@960 -- # wait 4089132 00:25:35.053 13:08:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:35.053 13:08:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:35.053 13:08:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:35.053 13:08:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.053 13:08:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.053 13:08:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.053 13:08:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.053 13:08:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.968 13:08:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:36.968 00:25:36.968 real 0m16.368s 00:25:36.968 user 0m33.555s 00:25:36.968 sys 0m6.382s 00:25:36.968 13:08:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:37.230 13:08:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.230 ************************************ 00:25:37.230 END TEST nvmf_shutdown_tc1 00:25:37.230 ************************************ 00:25:37.230 13:08:42 -- target/shutdown.sh@148 -- # run_test 
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:37.230 13:08:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:37.230 13:08:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:37.230 13:08:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.230 ************************************ 00:25:37.230 START TEST nvmf_shutdown_tc2 00:25:37.230 ************************************ 00:25:37.230 13:08:42 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:25:37.230 13:08:42 -- target/shutdown.sh@99 -- # starttarget 00:25:37.230 13:08:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:37.230 13:08:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:37.230 13:08:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.230 13:08:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:37.230 13:08:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:37.230 13:08:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:37.230 13:08:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.230 13:08:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.230 13:08:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.230 13:08:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:37.230 13:08:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.230 13:08:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.230 13:08:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:37.230 13:08:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.230 13:08:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.230 13:08:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.230 13:08:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.230 13:08:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.230 13:08:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.230 13:08:42 -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.230 13:08:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.230 13:08:42 -- nvmf/common.sh@296 -- # e810=() 00:25:37.230 13:08:42 -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.230 13:08:42 -- nvmf/common.sh@297 -- # x722=() 00:25:37.230 13:08:42 -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.230 13:08:42 -- nvmf/common.sh@298 -- # mlx=() 00:25:37.230 13:08:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.230 13:08:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.230 13:08:42 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:25:37.230 13:08:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.230 13:08:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.230 13:08:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.230 13:08:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:37.230 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:37.230 13:08:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.230 13:08:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:37.230 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:37.230 13:08:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.230 13:08:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.230 13:08:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.230 13:08:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:37.230 13:08:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.230 13:08:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:37.230 Found net devices under 0000:31:00.0: cvl_0_0 00:25:37.230 13:08:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.230 13:08:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.230 13:08:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.230 13:08:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:37.230 13:08:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.230 13:08:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:37.230 Found net devices under 0000:31:00.1: cvl_0_1 00:25:37.230 13:08:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.230 13:08:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:37.230 13:08:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:37.230 13:08:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:37.230 13:08:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:37.230 13:08:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.230 13:08:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.230 13:08:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.230 13:08:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.230 13:08:42 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.230 13:08:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.230 13:08:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.230 13:08:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.230 13:08:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.230 13:08:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.230 13:08:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.230 13:08:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.230 13:08:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.491 13:08:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.491 13:08:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.491 13:08:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.491 13:08:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.491 13:08:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.491 13:08:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.491 13:08:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:25:37.491 00:25:37.491 --- 10.0.0.2 ping statistics --- 00:25:37.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.491 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:25:37.491 13:08:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:25:37.751 00:25:37.751 --- 10.0.0.1 ping statistics --- 00:25:37.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.751 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:25:37.751 13:08:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.751 13:08:42 -- nvmf/common.sh@411 -- # return 0 00:25:37.751 13:08:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:37.751 13:08:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.751 13:08:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:37.751 13:08:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:37.751 13:08:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.751 13:08:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:37.751 13:08:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:37.751 13:08:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:37.751 13:08:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:37.751 13:08:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:37.751 13:08:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.751 13:08:42 -- nvmf/common.sh@470 -- # nvmfpid=4091330 00:25:37.751 13:08:42 -- nvmf/common.sh@471 -- # waitforlisten 4091330 00:25:37.751 13:08:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:37.751 13:08:42 -- common/autotest_common.sh@817 -- # '[' -z 4091330 ']' 00:25:37.751 13:08:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.751 13:08:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:37.751 13:08:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.751 13:08:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:37.751 13:08:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.751 [2024-04-26 13:08:42.668759] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:37.751 [2024-04-26 13:08:42.668806] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.751 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.751 [2024-04-26 13:08:42.750345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.751 [2024-04-26 13:08:42.804548] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.751 [2024-04-26 13:08:42.804581] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.751 [2024-04-26 13:08:42.804586] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.751 [2024-04-26 13:08:42.804591] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.751 [2024-04-26 13:08:42.804595] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
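The app_setup_trace notices repeated above are the standing hint for debugging this target instance: because nvmf_tgt was started with -e 0xFFFF and instance id 0, a trace can be pulled at any point while it runs, for example:

spdk_trace -s nvmf -i 0            # live snapshot of the enabled tracepoint groups
cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the shared-memory trace file for offline analysis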
00:25:37.751 [2024-04-26 13:08:42.804703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.751 [2024-04-26 13:08:42.804879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.751 [2024-04-26 13:08:42.805168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.751 [2024-04-26 13:08:42.805169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:38.697 13:08:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:38.697 13:08:43 -- common/autotest_common.sh@850 -- # return 0 00:25:38.697 13:08:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:38.697 13:08:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:38.697 13:08:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.697 13:08:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.697 13:08:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.697 13:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.697 13:08:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.697 [2024-04-26 13:08:43.480161] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.697 13:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.697 13:08:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:38.697 13:08:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:38.697 13:08:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:38.697 13:08:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.697 13:08:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:38.697 13:08:43 -- target/shutdown.sh@28 -- # cat 00:25:38.697 13:08:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:38.697 13:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.697 13:08:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.697 Malloc1 00:25:38.697 [2024-04-26 13:08:43.579061] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.697 Malloc2 
00:25:38.697 Malloc3 00:25:38.697 Malloc4 00:25:38.697 Malloc5 00:25:38.697 Malloc6 00:25:38.959 Malloc7 00:25:38.959 Malloc8 00:25:38.959 Malloc9 00:25:38.959 Malloc10 00:25:38.959 13:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.959 13:08:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:38.959 13:08:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:38.959 13:08:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.959 13:08:43 -- target/shutdown.sh@103 -- # perfpid=4091539 00:25:38.959 13:08:43 -- target/shutdown.sh@104 -- # waitforlisten 4091539 /var/tmp/bdevperf.sock 00:25:38.959 13:08:43 -- common/autotest_common.sh@817 -- # '[' -z 4091539 ']' 00:25:38.959 13:08:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:38.959 13:08:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:38.959 13:08:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:38.959 13:08:43 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:38.959 13:08:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:38.959 13:08:43 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:38.959 13:08:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.959 13:08:43 -- nvmf/common.sh@521 -- # config=() 00:25:38.959 13:08:43 -- nvmf/common.sh@521 -- # local subsystem config 00:25:38.959 13:08:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:38.959 13:08:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:38.959 { 00:25:38.959 "params": { 00:25:38.959 "name": "Nvme$subsystem", 00:25:38.959 "trtype": "$TEST_TRANSPORT", 00:25:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.959 "adrfam": "ipv4", 00:25:38.959 "trsvcid": "$NVMF_PORT", 00:25:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.959 "hdgst": ${hdgst:-false}, 00:25:38.959 "ddgst": ${ddgst:-false} 00:25:38.959 }, 00:25:38.959 "method": "bdev_nvme_attach_controller" 00:25:38.959 } 00:25:38.959 EOF 00:25:38.959 )") 00:25:38.959 13:08:43 -- nvmf/common.sh@543 -- # cat 00:25:38.959 13:08:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:38.959 13:08:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:38.959 { 00:25:38.959 "params": { 00:25:38.959 "name": "Nvme$subsystem", 00:25:38.959 "trtype": "$TEST_TRANSPORT", 00:25:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.959 "adrfam": "ipv4", 00:25:38.959 "trsvcid": "$NVMF_PORT", 00:25:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.959 "hdgst": ${hdgst:-false}, 00:25:38.959 "ddgst": ${ddgst:-false} 00:25:38.959 }, 00:25:38.959 "method": "bdev_nvme_attach_controller" 00:25:38.959 } 00:25:38.959 EOF 00:25:38.959 )") 00:25:38.959 13:08:43 -- nvmf/common.sh@543 -- # cat 00:25:38.959 13:08:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:38.959 13:08:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:38.959 { 00:25:38.959 "params": { 00:25:38.959 "name": "Nvme$subsystem", 00:25:38.959 "trtype": "$TEST_TRANSPORT", 00:25:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:38.959 "adrfam": "ipv4", 00:25:38.959 "trsvcid": "$NVMF_PORT", 00:25:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.959 "hdgst": ${hdgst:-false}, 00:25:38.959 "ddgst": ${ddgst:-false} 00:25:38.959 }, 00:25:38.959 "method": "bdev_nvme_attach_controller" 00:25:38.959 } 00:25:38.959 EOF 00:25:38.959 )") 00:25:38.959 13:08:43 -- nvmf/common.sh@543 -- # cat 00:25:38.959 13:08:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:38.959 13:08:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:38.959 { 00:25:38.959 "params": { 00:25:38.959 "name": "Nvme$subsystem", 00:25:38.959 "trtype": "$TEST_TRANSPORT", 00:25:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.959 "adrfam": "ipv4", 00:25:38.959 "trsvcid": "$NVMF_PORT", 00:25:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.959 "hdgst": ${hdgst:-false}, 00:25:38.959 "ddgst": ${ddgst:-false} 00:25:38.959 }, 00:25:38.959 "method": "bdev_nvme_attach_controller" 00:25:38.959 } 00:25:38.959 EOF 00:25:38.959 )") 00:25:38.959 13:08:44 -- nvmf/common.sh@543 -- # cat 00:25:38.959 13:08:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:38.959 13:08:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:38.959 { 00:25:38.959 "params": { 00:25:38.959 "name": "Nvme$subsystem", 00:25:38.959 "trtype": "$TEST_TRANSPORT", 00:25:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.959 "adrfam": "ipv4", 00:25:38.959 "trsvcid": "$NVMF_PORT", 00:25:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.959 "hdgst": ${hdgst:-false}, 00:25:38.959 "ddgst": ${ddgst:-false} 00:25:38.959 }, 00:25:38.959 "method": "bdev_nvme_attach_controller" 00:25:38.959 } 00:25:38.959 EOF 00:25:38.959 )") 00:25:38.959 13:08:44 -- nvmf/common.sh@543 -- # cat 00:25:38.959 13:08:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:38.959 13:08:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:38.959 { 00:25:38.959 "params": { 00:25:38.959 "name": "Nvme$subsystem", 00:25:38.959 "trtype": "$TEST_TRANSPORT", 00:25:38.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.960 "adrfam": "ipv4", 00:25:38.960 "trsvcid": "$NVMF_PORT", 00:25:38.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.960 "hdgst": ${hdgst:-false}, 00:25:38.960 "ddgst": ${ddgst:-false} 00:25:38.960 }, 00:25:38.960 "method": "bdev_nvme_attach_controller" 00:25:38.960 } 00:25:38.960 EOF 00:25:38.960 )") 00:25:38.960 13:08:44 -- nvmf/common.sh@543 -- # cat 00:25:39.221 13:08:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.221 [2024-04-26 13:08:44.021694] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:39.221 [2024-04-26 13:08:44.021747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091539 ] 00:25:39.221 13:08:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.221 { 00:25:39.221 "params": { 00:25:39.221 "name": "Nvme$subsystem", 00:25:39.221 "trtype": "$TEST_TRANSPORT", 00:25:39.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.221 "adrfam": "ipv4", 00:25:39.221 "trsvcid": "$NVMF_PORT", 00:25:39.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.221 "hdgst": ${hdgst:-false}, 00:25:39.221 "ddgst": ${ddgst:-false} 00:25:39.221 }, 00:25:39.221 "method": "bdev_nvme_attach_controller" 00:25:39.221 } 00:25:39.221 EOF 00:25:39.221 )") 00:25:39.221 13:08:44 -- nvmf/common.sh@543 -- # cat 00:25:39.221 13:08:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.221 13:08:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.221 { 00:25:39.221 "params": { 00:25:39.221 "name": "Nvme$subsystem", 00:25:39.221 "trtype": "$TEST_TRANSPORT", 00:25:39.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.221 "adrfam": "ipv4", 00:25:39.221 "trsvcid": "$NVMF_PORT", 00:25:39.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.221 "hdgst": ${hdgst:-false}, 00:25:39.221 "ddgst": ${ddgst:-false} 00:25:39.221 }, 00:25:39.221 "method": "bdev_nvme_attach_controller" 00:25:39.221 } 00:25:39.221 EOF 00:25:39.221 )") 00:25:39.221 13:08:44 -- nvmf/common.sh@543 -- # cat 00:25:39.221 13:08:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.221 13:08:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.221 { 00:25:39.221 "params": { 00:25:39.221 "name": "Nvme$subsystem", 00:25:39.221 "trtype": "$TEST_TRANSPORT", 00:25:39.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.221 "adrfam": "ipv4", 00:25:39.221 "trsvcid": "$NVMF_PORT", 00:25:39.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.221 "hdgst": ${hdgst:-false}, 00:25:39.221 "ddgst": ${ddgst:-false} 00:25:39.221 }, 00:25:39.221 "method": "bdev_nvme_attach_controller" 00:25:39.221 } 00:25:39.221 EOF 00:25:39.221 )") 00:25:39.221 13:08:44 -- nvmf/common.sh@543 -- # cat 00:25:39.221 13:08:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:39.221 13:08:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:39.221 { 00:25:39.221 "params": { 00:25:39.221 "name": "Nvme$subsystem", 00:25:39.221 "trtype": "$TEST_TRANSPORT", 00:25:39.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.221 "adrfam": "ipv4", 00:25:39.221 "trsvcid": "$NVMF_PORT", 00:25:39.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.221 "hdgst": ${hdgst:-false}, 00:25:39.221 "ddgst": ${ddgst:-false} 00:25:39.221 }, 00:25:39.221 "method": "bdev_nvme_attach_controller" 00:25:39.221 } 00:25:39.221 EOF 00:25:39.222 )") 00:25:39.222 13:08:44 -- nvmf/common.sh@543 -- # cat 00:25:39.222 13:08:44 -- nvmf/common.sh@545 -- # jq . 
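Note on the trace above: gen_nvmf_target_json builds one JSON fragment per subsystem (a heredoc per loop iteration appended to a config array) and then joins the fragments for jq. A minimal, self-contained sketch of that pattern, assuming illustrative names and a simplified wrapper object rather than the exact helper from nvmf/common.sh:

#!/usr/bin/env bash
# Sketch only: mirrors the heredoc-per-subsystem / IFS=, / jq steps visible in
# the trace; gen_target_json and the wrapper layout are illustrative.
gen_target_json() {
  local subsystem
  local config=()
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Join the per-subsystem fragments with commas and let jq validate the result.
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}

gen_target_json 1 2 3   # emits three bdev_nvme_attach_controller entries

Generating the fragments in the shell and letting jq validate the joined output avoids hand-maintaining a static JSON file for ten nearly identical controllers.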
00:25:39.222 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.222 13:08:44 -- nvmf/common.sh@546 -- # IFS=, 00:25:39.222 13:08:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme1", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme2", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme3", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme4", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme5", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme6", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme7", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme8", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 
00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme9", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 },{ 00:25:39.222 "params": { 00:25:39.222 "name": "Nvme10", 00:25:39.222 "trtype": "tcp", 00:25:39.222 "traddr": "10.0.0.2", 00:25:39.222 "adrfam": "ipv4", 00:25:39.222 "trsvcid": "4420", 00:25:39.222 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:39.222 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:39.222 "hdgst": false, 00:25:39.222 "ddgst": false 00:25:39.222 }, 00:25:39.222 "method": "bdev_nvme_attach_controller" 00:25:39.222 }' 00:25:39.222 [2024-04-26 13:08:44.082885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.222 [2024-04-26 13:08:44.145731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.608 Running I/O for 10 seconds... 00:25:40.608 13:08:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:40.608 13:08:45 -- common/autotest_common.sh@850 -- # return 0 00:25:40.608 13:08:45 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:40.608 13:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.608 13:08:45 -- common/autotest_common.sh@10 -- # set +x 00:25:40.870 13:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.870 13:08:45 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:40.870 13:08:45 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:40.870 13:08:45 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:40.870 13:08:45 -- target/shutdown.sh@57 -- # local ret=1 00:25:40.870 13:08:45 -- target/shutdown.sh@58 -- # local i 00:25:40.871 13:08:45 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:40.871 13:08:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:40.871 13:08:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:40.871 13:08:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:40.871 13:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.871 13:08:45 -- common/autotest_common.sh@10 -- # set +x 00:25:40.871 13:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.871 13:08:45 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:40.871 13:08:45 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:40.871 13:08:45 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:41.131 13:08:46 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:41.131 13:08:46 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:41.131 13:08:46 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.131 13:08:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.131 13:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.131 13:08:46 -- common/autotest_common.sh@10 -- # set +x 00:25:41.131 13:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.131 13:08:46 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:41.131 13:08:46 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:41.131 13:08:46 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:41.392 13:08:46 -- 
target/shutdown.sh@59 -- # (( i-- )) 00:25:41.392 13:08:46 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:41.392 13:08:46 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.392 13:08:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.392 13:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.392 13:08:46 -- common/autotest_common.sh@10 -- # set +x 00:25:41.392 13:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.392 13:08:46 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:41.392 13:08:46 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:41.392 13:08:46 -- target/shutdown.sh@64 -- # ret=0 00:25:41.392 13:08:46 -- target/shutdown.sh@65 -- # break 00:25:41.392 13:08:46 -- target/shutdown.sh@69 -- # return 0 00:25:41.392 13:08:46 -- target/shutdown.sh@110 -- # killprocess 4091539 00:25:41.392 13:08:46 -- common/autotest_common.sh@936 -- # '[' -z 4091539 ']' 00:25:41.392 13:08:46 -- common/autotest_common.sh@940 -- # kill -0 4091539 00:25:41.392 13:08:46 -- common/autotest_common.sh@941 -- # uname 00:25:41.392 13:08:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:41.392 13:08:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4091539 00:25:41.392 13:08:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:41.392 13:08:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:41.392 13:08:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4091539' 00:25:41.392 killing process with pid 4091539 00:25:41.392 13:08:46 -- common/autotest_common.sh@955 -- # kill 4091539 00:25:41.392 13:08:46 -- common/autotest_common.sh@960 -- # wait 4091539 00:25:41.654 Received shutdown signal, test time was about 0.994074 seconds 00:25:41.654 00:25:41.654 Latency(us) 00:25:41.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.654 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme1n1 : 0.93 206.16 12.89 0.00 0.00 306752.28 15619.41 246415.36 00:25:41.654 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme2n1 : 0.96 267.40 16.71 0.00 0.00 231847.68 18131.63 248162.99 00:25:41.654 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme3n1 : 0.94 216.52 13.53 0.00 0.00 277239.27 3822.93 241172.48 00:25:41.654 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme4n1 : 0.94 271.75 16.98 0.00 0.00 218544.53 10267.31 249910.61 00:25:41.654 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme5n1 : 0.95 268.13 16.76 0.00 0.00 216962.35 19223.89 221948.59 00:25:41.654 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme6n1 : 0.97 262.11 16.38 0.00 0.00 216888.11 15947.09 239424.85 00:25:41.654 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme7n1 : 0.96 266.30 16.64 0.00 0.00 209080.53 18896.21 249910.61 
00:25:41.654 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme8n1 : 0.99 261.78 16.36 0.00 0.00 199326.27 23702.19 246415.36 00:25:41.654 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme9n1 : 0.94 203.26 12.70 0.00 0.00 260607.72 20097.71 267386.88 00:25:41.654 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.654 Verification LBA range: start 0x0 length 0x400 00:25:41.654 Nvme10n1 : 0.95 202.60 12.66 0.00 0.00 255664.64 20643.84 246415.36 00:25:41.654 =================================================================================================================== 00:25:41.654 Total : 2426.00 151.63 0.00 0.00 235477.21 3822.93 267386.88 00:25:41.654 13:08:46 -- target/shutdown.sh@113 -- # sleep 1 00:25:43.039 13:08:47 -- target/shutdown.sh@114 -- # kill -0 4091330 00:25:43.039 13:08:47 -- target/shutdown.sh@116 -- # stoptarget 00:25:43.039 13:08:47 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:43.039 13:08:47 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:43.039 13:08:47 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:43.039 13:08:47 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:43.039 13:08:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:43.039 13:08:47 -- nvmf/common.sh@117 -- # sync 00:25:43.039 13:08:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.039 13:08:47 -- nvmf/common.sh@120 -- # set +e 00:25:43.039 13:08:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.039 13:08:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.039 rmmod nvme_tcp 00:25:43.039 rmmod nvme_fabrics 00:25:43.039 rmmod nvme_keyring 00:25:43.039 13:08:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.039 13:08:47 -- nvmf/common.sh@124 -- # set -e 00:25:43.039 13:08:47 -- nvmf/common.sh@125 -- # return 0 00:25:43.039 13:08:47 -- nvmf/common.sh@478 -- # '[' -n 4091330 ']' 00:25:43.039 13:08:47 -- nvmf/common.sh@479 -- # killprocess 4091330 00:25:43.039 13:08:47 -- common/autotest_common.sh@936 -- # '[' -z 4091330 ']' 00:25:43.039 13:08:47 -- common/autotest_common.sh@940 -- # kill -0 4091330 00:25:43.039 13:08:47 -- common/autotest_common.sh@941 -- # uname 00:25:43.039 13:08:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.039 13:08:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4091330 00:25:43.039 13:08:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:43.039 13:08:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:43.039 13:08:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4091330' 00:25:43.039 killing process with pid 4091330 00:25:43.039 13:08:47 -- common/autotest_common.sh@955 -- # kill 4091330 00:25:43.039 13:08:47 -- common/autotest_common.sh@960 -- # wait 4091330 00:25:43.039 13:08:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:43.039 13:08:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:43.039 13:08:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:43.039 13:08:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.039 13:08:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.039 13:08:48 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.039 13:08:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.039 13:08:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.587 13:08:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.587 00:25:45.587 real 0m7.917s 00:25:45.587 user 0m23.850s 00:25:45.587 sys 0m1.253s 00:25:45.587 13:08:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:45.587 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 ************************************ 00:25:45.587 END TEST nvmf_shutdown_tc2 00:25:45.587 ************************************ 00:25:45.587 13:08:50 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:45.587 13:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:45.587 13:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:45.587 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 ************************************ 00:25:45.587 START TEST nvmf_shutdown_tc3 00:25:45.587 ************************************ 00:25:45.587 13:08:50 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:25:45.587 13:08:50 -- target/shutdown.sh@121 -- # starttarget 00:25:45.587 13:08:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:45.587 13:08:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:45.587 13:08:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.587 13:08:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:45.587 13:08:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:45.587 13:08:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:45.587 13:08:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.587 13:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.587 13:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.587 13:08:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:45.587 13:08:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.587 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 13:08:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:45.587 13:08:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.587 13:08:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.587 13:08:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.587 13:08:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.587 13:08:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.587 13:08:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.587 13:08:50 -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.587 13:08:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.587 13:08:50 -- nvmf/common.sh@296 -- # e810=() 00:25:45.587 13:08:50 -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.587 13:08:50 -- nvmf/common.sh@297 -- # x722=() 00:25:45.587 13:08:50 -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.587 13:08:50 -- nvmf/common.sh@298 -- # mlx=() 00:25:45.587 13:08:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.587 13:08:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.587 13:08:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.587 13:08:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.587 13:08:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.587 13:08:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.587 13:08:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:45.587 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:45.587 13:08:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.587 13:08:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:45.587 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:45.587 13:08:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.587 13:08:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.587 13:08:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.587 13:08:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.587 13:08:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.587 13:08:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.587 13:08:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:45.587 Found net devices under 0000:31:00.0: cvl_0_0 00:25:45.588 13:08:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.588 13:08:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.588 13:08:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.588 13:08:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.588 13:08:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.588 13:08:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:45.588 Found net devices under 0000:31:00.1: cvl_0_1 00:25:45.588 13:08:50 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.588 13:08:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:45.588 13:08:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:45.588 13:08:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:45.588 13:08:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:45.588 13:08:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:45.588 13:08:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.588 13:08:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.588 13:08:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.588 13:08:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.588 13:08:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.588 13:08:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.588 13:08:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.588 13:08:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.588 13:08:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.588 13:08:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.588 13:08:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.588 13:08:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.588 13:08:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.588 13:08:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.588 13:08:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.588 13:08:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.588 13:08:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.849 13:08:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.849 13:08:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.849 13:08:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:25:45.849 00:25:45.849 --- 10.0.0.2 ping statistics --- 00:25:45.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.849 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:25:45.849 13:08:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:25:45.849 00:25:45.849 --- 10.0.0.1 ping statistics --- 00:25:45.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.849 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:25:45.849 13:08:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.849 13:08:50 -- nvmf/common.sh@411 -- # return 0 00:25:45.849 13:08:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:45.849 13:08:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.849 13:08:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:45.849 13:08:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:45.849 13:08:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.849 13:08:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:45.849 13:08:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:45.849 13:08:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:45.849 13:08:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:45.849 13:08:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:45.849 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:25:45.849 13:08:50 -- nvmf/common.sh@470 -- # nvmfpid=4092868 00:25:45.849 13:08:50 -- nvmf/common.sh@471 -- # waitforlisten 4092868 00:25:45.849 13:08:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:45.849 13:08:50 -- common/autotest_common.sh@817 -- # '[' -z 4092868 ']' 00:25:45.849 13:08:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.849 13:08:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.849 13:08:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.849 13:08:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.849 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:25:45.849 [2024-04-26 13:08:50.808386] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:45.849 [2024-04-26 13:08:50.808451] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.849 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.849 [2024-04-26 13:08:50.896885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.111 [2024-04-26 13:08:50.958134] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.111 [2024-04-26 13:08:50.958171] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.111 [2024-04-26 13:08:50.958176] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.111 [2024-04-26 13:08:50.958181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.111 [2024-04-26 13:08:50.958185] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
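Note on the trace above: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket is available. A rough sketch of that start-and-wait step, assuming $SPDK_ROOT stands in for the checked-out SPDK tree and with an illustrative polling loop rather than the exact waitforlisten implementation:

# Sketch only: single "ip netns exec" shown for brevity; flags match the trace.
NS=cvl_0_0_ns_spdk
SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll (up to ~10s) for the target's RPC Unix socket before issuing rpc.py calls.
for _ in $(seq 1 100); do
  [ -S "$SOCK" ] && break
  sleep 0.1
done

# From here the test drives the target over RPC, e.g.:
#   rpc.py -s "$SOCK" nvmf_create_transport -t tcp -o -u 8192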
00:25:46.111 [2024-04-26 13:08:50.958329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.111 [2024-04-26 13:08:50.958467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.111 [2024-04-26 13:08:50.958628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.111 [2024-04-26 13:08:50.958630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:46.685 13:08:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.685 13:08:51 -- common/autotest_common.sh@850 -- # return 0 00:25:46.685 13:08:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:46.685 13:08:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:46.685 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:25:46.685 13:08:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.685 13:08:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.685 13:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.685 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:25:46.685 [2024-04-26 13:08:51.623131] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.685 13:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.685 13:08:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:46.685 13:08:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:46.685 13:08:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:46.685 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:25:46.685 13:08:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:46.685 13:08:51 -- target/shutdown.sh@28 -- # cat 00:25:46.685 13:08:51 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:46.685 13:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.685 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:25:46.685 Malloc1 00:25:46.685 [2024-04-26 13:08:51.721905] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.685 Malloc2 
00:25:46.946 Malloc3 00:25:46.946 Malloc4 00:25:46.946 Malloc5 00:25:46.946 Malloc6 00:25:46.946 Malloc7 00:25:46.946 Malloc8 00:25:47.209 Malloc9 00:25:47.209 Malloc10 00:25:47.209 13:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.209 13:08:52 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:47.209 13:08:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:47.209 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:25:47.209 13:08:52 -- target/shutdown.sh@125 -- # perfpid=4093246 00:25:47.209 13:08:52 -- target/shutdown.sh@126 -- # waitforlisten 4093246 /var/tmp/bdevperf.sock 00:25:47.209 13:08:52 -- common/autotest_common.sh@817 -- # '[' -z 4093246 ']' 00:25:47.209 13:08:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.209 13:08:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:47.209 13:08:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.209 13:08:52 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:47.209 13:08:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:47.209 13:08:52 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:47.209 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:25:47.209 13:08:52 -- nvmf/common.sh@521 -- # config=() 00:25:47.209 13:08:52 -- nvmf/common.sh@521 -- # local subsystem config 00:25:47.209 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.209 { 00:25:47.209 "params": { 00:25:47.209 "name": "Nvme$subsystem", 00:25:47.209 "trtype": "$TEST_TRANSPORT", 00:25:47.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.209 "adrfam": "ipv4", 00:25:47.209 "trsvcid": "$NVMF_PORT", 00:25:47.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.209 "hdgst": ${hdgst:-false}, 00:25:47.209 "ddgst": ${ddgst:-false} 00:25:47.209 }, 00:25:47.209 "method": "bdev_nvme_attach_controller" 00:25:47.209 } 00:25:47.209 EOF 00:25:47.209 )") 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.209 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.209 { 00:25:47.209 "params": { 00:25:47.209 "name": "Nvme$subsystem", 00:25:47.209 "trtype": "$TEST_TRANSPORT", 00:25:47.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.209 "adrfam": "ipv4", 00:25:47.209 "trsvcid": "$NVMF_PORT", 00:25:47.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.209 "hdgst": ${hdgst:-false}, 00:25:47.209 "ddgst": ${ddgst:-false} 00:25:47.209 }, 00:25:47.209 "method": "bdev_nvme_attach_controller" 00:25:47.209 } 00:25:47.209 EOF 00:25:47.209 )") 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.209 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.209 { 00:25:47.209 "params": { 00:25:47.209 "name": "Nvme$subsystem", 00:25:47.209 "trtype": "$TEST_TRANSPORT", 00:25:47.209 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:47.209 "adrfam": "ipv4", 00:25:47.209 "trsvcid": "$NVMF_PORT", 00:25:47.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.209 "hdgst": ${hdgst:-false}, 00:25:47.209 "ddgst": ${ddgst:-false} 00:25:47.209 }, 00:25:47.209 "method": "bdev_nvme_attach_controller" 00:25:47.209 } 00:25:47.209 EOF 00:25:47.209 )") 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.209 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.209 { 00:25:47.209 "params": { 00:25:47.209 "name": "Nvme$subsystem", 00:25:47.209 "trtype": "$TEST_TRANSPORT", 00:25:47.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.209 "adrfam": "ipv4", 00:25:47.209 "trsvcid": "$NVMF_PORT", 00:25:47.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.209 "hdgst": ${hdgst:-false}, 00:25:47.209 "ddgst": ${ddgst:-false} 00:25:47.209 }, 00:25:47.209 "method": "bdev_nvme_attach_controller" 00:25:47.209 } 00:25:47.209 EOF 00:25:47.209 )") 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.209 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.209 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.209 { 00:25:47.209 "params": { 00:25:47.209 "name": "Nvme$subsystem", 00:25:47.209 "trtype": "$TEST_TRANSPORT", 00:25:47.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "$NVMF_PORT", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.210 "hdgst": ${hdgst:-false}, 00:25:47.210 "ddgst": ${ddgst:-false} 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 } 00:25:47.210 EOF 00:25:47.210 )") 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.210 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.210 { 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme$subsystem", 00:25:47.210 "trtype": "$TEST_TRANSPORT", 00:25:47.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "$NVMF_PORT", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.210 "hdgst": ${hdgst:-false}, 00:25:47.210 "ddgst": ${ddgst:-false} 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 } 00:25:47.210 EOF 00:25:47.210 )") 00:25:47.210 [2024-04-26 13:08:52.160563] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:47.210 [2024-04-26 13:08:52.160613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093246 ] 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.210 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.210 { 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme$subsystem", 00:25:47.210 "trtype": "$TEST_TRANSPORT", 00:25:47.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "$NVMF_PORT", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.210 "hdgst": ${hdgst:-false}, 00:25:47.210 "ddgst": ${ddgst:-false} 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 } 00:25:47.210 EOF 00:25:47.210 )") 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.210 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.210 { 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme$subsystem", 00:25:47.210 "trtype": "$TEST_TRANSPORT", 00:25:47.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "$NVMF_PORT", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.210 "hdgst": ${hdgst:-false}, 00:25:47.210 "ddgst": ${ddgst:-false} 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 } 00:25:47.210 EOF 00:25:47.210 )") 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.210 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.210 { 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme$subsystem", 00:25:47.210 "trtype": "$TEST_TRANSPORT", 00:25:47.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "$NVMF_PORT", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.210 "hdgst": ${hdgst:-false}, 00:25:47.210 "ddgst": ${ddgst:-false} 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 } 00:25:47.210 EOF 00:25:47.210 )") 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.210 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.210 13:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.210 { 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme$subsystem", 00:25:47.210 "trtype": "$TEST_TRANSPORT", 00:25:47.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "$NVMF_PORT", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.210 "hdgst": ${hdgst:-false}, 00:25:47.210 "ddgst": ${ddgst:-false} 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 } 00:25:47.210 EOF 00:25:47.210 )") 00:25:47.210 13:08:52 -- nvmf/common.sh@543 -- # cat 00:25:47.210 13:08:52 -- nvmf/common.sh@545 -- # jq . 
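Note on the trace above: bdevperf receives its configuration as --json /dev/fd/63, i.e. the output of gen_nvmf_target_json delivered through bash process substitution rather than a temporary file. A hedged sketch of that wiring, reusing the illustrative gen_target_json helper sketched earlier (flags -q 64 -o 65536 -w verify -t 10 are taken from the trace):

# Sketch only: <( ) expands to a /dev/fd/NN path that bdevperf reads as JSON.
"$SPDK_ROOT/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!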
00:25:47.210 13:08:52 -- nvmf/common.sh@546 -- # IFS=, 00:25:47.210 13:08:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme1", 00:25:47.210 "trtype": "tcp", 00:25:47.210 "traddr": "10.0.0.2", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "4420", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.210 "hdgst": false, 00:25:47.210 "ddgst": false 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 },{ 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme2", 00:25:47.210 "trtype": "tcp", 00:25:47.210 "traddr": "10.0.0.2", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "4420", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:47.210 "hdgst": false, 00:25:47.210 "ddgst": false 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 },{ 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme3", 00:25:47.210 "trtype": "tcp", 00:25:47.210 "traddr": "10.0.0.2", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "4420", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:47.210 "hdgst": false, 00:25:47.210 "ddgst": false 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 },{ 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme4", 00:25:47.210 "trtype": "tcp", 00:25:47.210 "traddr": "10.0.0.2", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "4420", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:47.210 "hdgst": false, 00:25:47.210 "ddgst": false 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 },{ 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme5", 00:25:47.210 "trtype": "tcp", 00:25:47.210 "traddr": "10.0.0.2", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "4420", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:47.210 "hdgst": false, 00:25:47.210 "ddgst": false 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 },{ 00:25:47.210 "params": { 00:25:47.210 "name": "Nvme6", 00:25:47.210 "trtype": "tcp", 00:25:47.210 "traddr": "10.0.0.2", 00:25:47.210 "adrfam": "ipv4", 00:25:47.210 "trsvcid": "4420", 00:25:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:47.210 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:47.210 "hdgst": false, 00:25:47.210 "ddgst": false 00:25:47.210 }, 00:25:47.210 "method": "bdev_nvme_attach_controller" 00:25:47.210 },{ 00:25:47.210 "params": { 00:25:47.211 "name": "Nvme7", 00:25:47.211 "trtype": "tcp", 00:25:47.211 "traddr": "10.0.0.2", 00:25:47.211 "adrfam": "ipv4", 00:25:47.211 "trsvcid": "4420", 00:25:47.211 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:47.211 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:47.211 "hdgst": false, 00:25:47.211 "ddgst": false 00:25:47.211 }, 00:25:47.211 "method": "bdev_nvme_attach_controller" 00:25:47.211 },{ 00:25:47.211 "params": { 00:25:47.211 "name": "Nvme8", 00:25:47.211 "trtype": "tcp", 00:25:47.211 "traddr": "10.0.0.2", 00:25:47.211 "adrfam": "ipv4", 00:25:47.211 "trsvcid": "4420", 00:25:47.211 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:47.211 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:47.211 "hdgst": false, 00:25:47.211 "ddgst": false 00:25:47.211 }, 00:25:47.211 "method": 
"bdev_nvme_attach_controller" 00:25:47.211 },{ 00:25:47.211 "params": { 00:25:47.211 "name": "Nvme9", 00:25:47.211 "trtype": "tcp", 00:25:47.211 "traddr": "10.0.0.2", 00:25:47.211 "adrfam": "ipv4", 00:25:47.211 "trsvcid": "4420", 00:25:47.211 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:47.211 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:47.211 "hdgst": false, 00:25:47.211 "ddgst": false 00:25:47.211 }, 00:25:47.211 "method": "bdev_nvme_attach_controller" 00:25:47.211 },{ 00:25:47.211 "params": { 00:25:47.211 "name": "Nvme10", 00:25:47.211 "trtype": "tcp", 00:25:47.211 "traddr": "10.0.0.2", 00:25:47.211 "adrfam": "ipv4", 00:25:47.211 "trsvcid": "4420", 00:25:47.211 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:47.211 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:47.211 "hdgst": false, 00:25:47.211 "ddgst": false 00:25:47.211 }, 00:25:47.211 "method": "bdev_nvme_attach_controller" 00:25:47.211 }' 00:25:47.211 [2024-04-26 13:08:52.221436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.472 [2024-04-26 13:08:52.284614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.891 Running I/O for 10 seconds... 00:25:48.891 13:08:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:48.891 13:08:53 -- common/autotest_common.sh@850 -- # return 0 00:25:48.891 13:08:53 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:48.891 13:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.891 13:08:53 -- common/autotest_common.sh@10 -- # set +x 00:25:48.891 13:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.891 13:08:53 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.891 13:08:53 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:48.891 13:08:53 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:48.891 13:08:53 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:48.891 13:08:53 -- target/shutdown.sh@57 -- # local ret=1 00:25:48.891 13:08:53 -- target/shutdown.sh@58 -- # local i 00:25:48.891 13:08:53 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:48.891 13:08:53 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:48.891 13:08:53 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:48.891 13:08:53 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:48.891 13:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.891 13:08:53 -- common/autotest_common.sh@10 -- # set +x 00:25:48.891 13:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.891 13:08:53 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:48.891 13:08:53 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:48.891 13:08:53 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:49.152 13:08:54 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:49.153 13:08:54 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:49.153 13:08:54 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.153 13:08:54 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.153 13:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.153 13:08:54 -- common/autotest_common.sh@10 -- # set +x 00:25:49.153 13:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.153 13:08:54 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:49.153 13:08:54 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:49.153 13:08:54 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:49.413 13:08:54 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:49.413 13:08:54 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:49.413 13:08:54 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.413 13:08:54 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.413 13:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.413 13:08:54 -- common/autotest_common.sh@10 -- # set +x 00:25:49.413 13:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.413 13:08:54 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:49.413 13:08:54 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:49.413 13:08:54 -- target/shutdown.sh@64 -- # ret=0 00:25:49.413 13:08:54 -- target/shutdown.sh@65 -- # break 00:25:49.413 13:08:54 -- target/shutdown.sh@69 -- # return 0 00:25:49.413 13:08:54 -- target/shutdown.sh@135 -- # killprocess 4092868 00:25:49.413 13:08:54 -- common/autotest_common.sh@936 -- # '[' -z 4092868 ']' 00:25:49.413 13:08:54 -- common/autotest_common.sh@940 -- # kill -0 4092868 00:25:49.413 13:08:54 -- common/autotest_common.sh@941 -- # uname 00:25:49.413 13:08:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.413 13:08:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4092868 00:25:49.687 13:08:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:49.687 13:08:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:49.687 13:08:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4092868' 00:25:49.687 killing process with pid 4092868 00:25:49.687 13:08:54 -- common/autotest_common.sh@955 -- # kill 4092868 00:25:49.687 13:08:54 -- common/autotest_common.sh@960 -- # wait 4092868 00:25:49.687 [2024-04-26 13:08:54.486900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792870 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.486954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792870 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.486960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792870 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.486965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792870 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.487962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with [2024-04-26 13:08:54.487967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:49.687 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:1[2024-04-26 13:08:54.487980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.487993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.487999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.488003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-26 13:08:54.488008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488016] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.488020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 
00:25:49.687 [2024-04-26 13:08:54.488027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.488031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.488041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-26 13:08:54.488046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488055] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.488059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.488070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-04-26 13:08:54.488080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-26 13:08:54.488085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:1[2024-04-26 13:08:54.488097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 the state(5) to be set 00:25:49.687 [2024-04-26 13:08:54.488104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 
00:25:49.687 [2024-04-26 13:08:54.488104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-04-26 13:08:54.488109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 
00:25:49.688 [2024-04-26 13:08:54.488178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-26 13:08:54.488215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488239] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.688 [2024-04-26 13:08:54.488250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793e10 is same with the state(5) to be set 00:25:49.688 [2024-04-26 13:08:54.488289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-04-26 13:08:54.488478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-04-26 13:08:54.488485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-04-26 13:08:54.488848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-04-26 13:08:54.488901] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13710f0 was disconnected and freed. reset controller. 00:25:49.689 [2024-04-26 13:08:54.489322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489352] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489405] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489433] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489446] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.689 [2024-04-26 13:08:54.489495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489500] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489514] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 
13:08:54.489528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.489563] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792d00 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.491152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.690 [2024-04-26 13:08:54.491205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0220 (9): Bad file descriptor 00:25:49.690 [2024-04-26 13:08:54.491974] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.491996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492021] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be 
set 00:25:49.690 [2024-04-26 13:08:54.492048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492084] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492116] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492129] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492134] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492164] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492202] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492211] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492251] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793190 is same with the state(5) to be set 00:25:49.690 [2024-04-26 13:08:54.492389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.690 [2024-04-26 13:08:54.492626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.690 [2024-04-26 13:08:54.492637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb0220 with addr=10.0.0.2, port=4420 00:25:49.690 [2024-04-26 13:08:54.492645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb0220 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.492981] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.691 [2024-04-26 13:08:54.493006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0220 (9): Bad file descriptor 00:25:49.691 [2024-04-26 13:08:54.493050] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.691 [2024-04-26 13:08:54.493304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.691 [2024-04-26 13:08:54.493317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.691 [2024-04-26 13:08:54.493326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.691 [2024-04-26 13:08:54.494152] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
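For context on the connect() failures above: errno 111 on Linux is ECONNREFUSED, i.e. the initiator's reconnect attempts to the listener at 10.0.0.2:4420 are being refused while the target side is torn down by the shutdown test, which is why the controller reinitialization and reset ultimately fail. A minimal sketch (hypothetical loopback address and port, not taken from this run) that reproduces the same errno with a plain TCP connect:

import errno
import socket

# Connect to a port with no listener; on Linux this fails with
# errno 111 (ECONNREFUSED), the same code reported by the
# "posix_sock_create: *ERROR*: connect() failed, errno = 111" lines above.
try:
    with socket.create_connection(("127.0.0.1", 4999), timeout=1):
        pass
except OSError as exc:
    print(exc.errno, errno.errorcode.get(exc.errno))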
00:25:49.691 [2024-04-26 13:08:54.494198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c5a90 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.494294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147b6e0 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.494401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.691 [2024-04-26 13:08:54.494464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.691 [2024-04-26 13:08:54.494475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0ed0 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.494850] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.691 [2024-04-26 13:08:54.495622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495665] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495670] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495684] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495697] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 
13:08:54.495706] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495751] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495760] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.691 [2024-04-26 13:08:54.495774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495783] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495802] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same 
with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495917] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.495939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793620 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496607] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.692 [2024-04-26 13:08:54.496611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496638] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 
13:08:54.496720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496724] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496769] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same 
with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496832] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496846] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.692 [2024-04-26 13:08:54.496892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496931] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.496949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793ab0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497670] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497675] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.693 [2024-04-26 13:08:54.497687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 
13:08:54.497757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497853] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same 
with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.497867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.503126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.693 [2024-04-26 13:08:54.503729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.693 [2024-04-26 13:08:54.504065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.693 [2024-04-26 13:08:54.504077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb0220 with addr=10.0.0.2, port=4420 00:25:49.693 [2024-04-26 13:08:54.504085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb0220 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.504209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0220 (9): Bad file descriptor 00:25:49.693 [2024-04-26 13:08:54.504225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5a90 (9): Bad file descriptor 00:25:49.693 [2024-04-26 13:08:54.504250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147b6e0 (9): Bad file descriptor 00:25:49.693 [2024-04-26 13:08:54.504293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14748f0 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.504377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c0ed0 (9): Bad file descriptor 00:25:49.693 [2024-04-26 13:08:54.504406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.693 [2024-04-26 13:08:54.504461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.693 [2024-04-26 13:08:54.504468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e6630 is same with the state(5) to be set 00:25:49.693 [2024-04-26 13:08:54.504569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.693 [2024-04-26 13:08:54.504579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.693 [2024-04-26 13:08:54.504587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.693 [2024-04-26 13:08:54.504658] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.693 [2024-04-26 13:08:54.510474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510548] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510586] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8760 is same with the state(5) to be set 00:25:49.694 [2024-04-26 13:08:54.510675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.510985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.510995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.694 [2024-04-26 13:08:54.511158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-04-26 13:08:54.511165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-04-26 13:08:54.511749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 [2024-04-26 13:08:54.511758] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb5b0 is same with the state(5) to be set 00:25:49.695 [2024-04-26 13:08:54.511793] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12bb5b0 was disconnected and freed. reset controller. 00:25:49.695 [2024-04-26 13:08:54.513050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:49.695 [2024-04-26 13:08:54.513090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147be90 (9): Bad file descriptor 00:25:49.695 [2024-04-26 13:08:54.513758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.695 [2024-04-26 13:08:54.514086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.695 [2024-04-26 13:08:54.514097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147be90 with addr=10.0.0.2, port=4420 00:25:49.695 [2024-04-26 13:08:54.514105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147be90 is same with the state(5) to be set 00:25:49.695 [2024-04-26 13:08:54.514156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147be90 (9): Bad file descriptor 00:25:49.695 [2024-04-26 13:08:54.514194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.696 [2024-04-26 13:08:54.514209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:49.696 [2024-04-26 13:08:54.514216] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:49.696 [2024-04-26 13:08:54.514223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:49.696 [2024-04-26 13:08:54.514258] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.696 [2024-04-26 13:08:54.514457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.696 [2024-04-26 13:08:54.514774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.696 [2024-04-26 13:08:54.514784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb0220 with addr=10.0.0.2, port=4420 00:25:49.696 [2024-04-26 13:08:54.514791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb0220 is same with the state(5) to be set 00:25:49.696 [2024-04-26 13:08:54.514816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b40 is same with the state(5) to be set 00:25:49.696 [2024-04-26 13:08:54.514913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.514968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.514975] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ee280 is same with the state(5) to be set 00:25:49.696 [2024-04-26 13:08:54.514991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14748f0 (9): Bad file descriptor 00:25:49.696 [2024-04-26 13:08:54.515021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.515029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.515047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.515062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.696 [2024-04-26 13:08:54.515076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ef450 is same with the state(5) to be set 00:25:49.696 [2024-04-26 13:08:54.515100] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e6630 (9): Bad file descriptor 00:25:49.696 [2024-04-26 13:08:54.515157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0220 (9): Bad file descriptor 00:25:49.696 [2024-04-26 13:08:54.515183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.696 [2024-04-26 13:08:54.515422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.696 [2024-04-26 13:08:54.515446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.696 [2024-04-26 13:08:54.515455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 
13:08:54.515589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.515984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.515993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.516000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.516009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.516016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.516025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.516033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.516042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.516049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.516057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372260 is same with the state(5) to be set 00:25:49.697 [2024-04-26 13:08:54.517257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.517270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.517282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.517291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.517302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.517311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.697 [2024-04-26 13:08:54.517322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.697 [2024-04-26 13:08:54.517331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.698 [2024-04-26 13:08:54.517968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:49.698 [2024-04-26 13:08:54.517977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.517984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.517993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.518000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.523981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.523988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.699 [2024-04-26 13:08:54.523997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 
13:08:54.524165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.524188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.524196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146fde0 is same with the state(5) to be set 00:25:49.699 [2024-04-26 13:08:54.525544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.699 [2024-04-26 13:08:54.525848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.699 [2024-04-26 13:08:54.525859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.525983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.525992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.700 [2024-04-26 13:08:54.526419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.700 [2024-04-26 13:08:54.526428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.526435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.526444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.526451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.526460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.526467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.526476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.526484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.526495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.526502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.526511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.526519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.526528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.526535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.526544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.526551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.526560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.526567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.526577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.526584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.526593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.526600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.526609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.526617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.526626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.526633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.526641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1369b00 is same with the state(5) to be set
00:25:49.701 [2024-04-26 13:08:54.528147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:49.701 [2024-04-26 13:08:54.528170] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:49.701 [2024-04-26 13:08:54.528182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:49.701 [2024-04-26 13:08:54.528216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:49.701 [2024-04-26 13:08:54.528223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:49.701 [2024-04-26 13:08:54.528232] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:49.701 [2024-04-26 13:08:54.528277] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:49.701 [2024-04-26 13:08:54.528297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6b40 (9): Bad file descriptor
00:25:49.701 [2024-04-26 13:08:54.528317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ee280 (9): Bad file descriptor
00:25:49.701 [2024-04-26 13:08:54.528340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ef450 (9): Bad file descriptor
00:25:49.701 [2024-04-26 13:08:54.528409] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:49.701 [2024-04-26 13:08:54.528830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.701 [2024-04-26 13:08:54.529204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.701 [2024-04-26 13:08:54.529240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c0ed0 with addr=10.0.0.2, port=4420
00:25:49.701 [2024-04-26 13:08:54.529253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0ed0 is same with the state(5) to be set
00:25:49.701 [2024-04-26 13:08:54.529590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.701 [2024-04-26 13:08:54.530089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.701 [2024-04-26 13:08:54.530127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c5a90 with addr=10.0.0.2, port=4420
00:25:49.701 [2024-04-26 13:08:54.530138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c5a90 is same with the state(5) to be set
00:25:49.701 [2024-04-26 13:08:54.530543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.701 [2024-04-26 13:08:54.530770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.701 [2024-04-26 13:08:54.530779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147b6e0 with addr=10.0.0.2, port=4420
00:25:49.701 [2024-04-26 13:08:54.530787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147b6e0 is same with the state(5) to be set
00:25:49.701 [2024-04-26 13:08:54.531370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.531384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.531400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.531408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.531418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701 [2024-04-26 13:08:54.531424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.701 [2024-04-26 13:08:54.531434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.701
[2024-04-26 13:08:54.531441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531608] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.701 [2024-04-26 13:08:54.531656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.701 [2024-04-26 13:08:54.531666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.531984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.531991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.702 [2024-04-26 13:08:54.532302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.702 [2024-04-26 13:08:54.532313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.532433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.532441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712b0 is same with the state(5) to be set 00:25:49.703 [2024-04-26 13:08:54.533710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.533987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.533997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.703 [2024-04-26 13:08:54.534258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.703 [2024-04-26 13:08:54.534267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.704 [2024-04-26 13:08:54.534576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 13:08:54.534726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.704 [2024-04-26 13:08:54.534735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.704 [2024-04-26 
13:08:54.534743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.704 [2024-04-26 13:08:54.534752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.704 [2024-04-26 13:08:54.534759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.704 [2024-04-26 13:08:54.534768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.704 [2024-04-26 13:08:54.534775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.704 [2024-04-26 13:08:54.534785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.704 [2024-04-26 13:08:54.534792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.704 [2024-04-26 13:08:54.534801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.704 [2024-04-26 13:08:54.534809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.704 [2024-04-26 13:08:54.534817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ba150 is same with the state(5) to be set
00:25:49.704 [2024-04-26 13:08:54.536634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:49.704 [2024-04-26 13:08:54.536659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:49.704 task offset: 24576 on job bdev=Nvme1n1 fails
00:25:49.704
00:25:49.704 Latency(us)
00:25:49.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.704 Job: Nvme1n1 ended in about 0.94 seconds with error
00:25:49.704 Verification LBA range: start 0x0 length 0x400
00:25:49.704 Nvme1n1 : 0.94 204.33 12.77 68.11 0.00 232176.80 4068.69 239424.85
00:25:49.704 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.704 Job: Nvme2n1 ended in about 0.97 seconds with error
00:25:49.704 Verification LBA range: start 0x0 length 0x400
00:25:49.704 Nvme2n1 : 0.97 144.91 9.06 53.82 0.00 310432.43 33423.36 251658.24
00:25:49.704 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.704 Job: Nvme3n1 ended in about 0.97 seconds with error
00:25:49.704 Verification LBA range: start 0x0 length 0x400
00:25:49.704 Nvme3n1 : 0.97 197.06 12.32 65.69 0.00 231139.41 11195.73 255153.49
00:25:49.704 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.704 Job: Nvme4n1 ended in about 0.98 seconds with error
00:25:49.704 Verification LBA range: start 0x0 length 0x400
00:25:49.705 Nvme4n1 : 0.98 195.41 12.21 65.14 0.00 228406.61 15291.73 237677.23
00:25:49.705 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.705 Job: Nvme5n1 ended in about 0.98 seconds with error
00:25:49.705 Verification LBA range: start 0x0 length 0x400
00:25:49.705 Nvme5n1 : 0.98 194.94 12.18 64.98 0.00 224158.93 21299.20 227191.47
00:25:49.705 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.705 Job: Nvme6n1 ended in about 0.96 seconds with error
00:25:49.705 Verification LBA range: start 0x0 length 0x400
00:25:49.705 Nvme6n1 : 0.96 199.59 12.47 66.53 0.00 213641.60 19223.89 258648.75
00:25:49.705 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.705 Verification LBA range: start 0x0 length 0x400
00:25:49.705 Nvme7n1 : 0.96 267.52 16.72 0.00 0.00 207444.91 19005.44 248162.99
00:25:49.705 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.705 Verification LBA range: start 0x0 length 0x400
00:25:49.705 Nvme8n1 : 0.95 213.78 13.36 0.00 0.00 251839.60 2280.11 248162.99
00:25:49.705 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.705 Verification LBA range: start 0x0 length 0x400
00:25:49.705 Nvme9n1 : 0.95 201.68 12.61 0.00 0.00 262189.23 21080.75 253405.87
00:25:49.705 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.705 Job: Nvme10n1 ended in about 0.98 seconds with error
00:25:49.705 Verification LBA range: start 0x0 length 0x400
00:25:49.705 Nvme10n1 : 0.98 131.05 8.19 65.52 0.00 264021.05 15947.09 270882.13
00:25:49.705 ===================================================================================================================
00:25:49.705 Total : 1950.28 121.89 449.80 0.00 239318.67 2280.11 270882.13
00:25:49.705 [2024-04-26 13:08:54.563430] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:49.705 [2024-04-26 13:08:54.563481] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:49.705 [2024-04-26 13:08:54.563538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c0ed0 (9): Bad file descriptor
00:25:49.705 [2024-04-26 13:08:54.563551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5a90 (9): Bad file descriptor
00:25:49.705 [2024-04-26 13:08:54.563561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147b6e0 (9): Bad file descriptor
00:25:49.705 [2024-04-26 13:08:54.564071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.705 [2024-04-26 13:08:54.564273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.705 [2024-04-26 13:08:54.564283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147be90 with addr=10.0.0.2, port=4420
00:25:49.705 [2024-04-26 13:08:54.564293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147be90 is same with the state(5) to be set
00:25:49.705 [2024-04-26 13:08:54.564729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.705 [2024-04-26 13:08:54.565109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.705 [2024-04-26 13:08:54.565124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e6630 with addr=10.0.0.2, port=4420
00:25:49.705 [2024-04-26 13:08:54.565131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e6630 is same with the state(5) to be set
00:25:49.705 [2024-04-26 13:08:54.565335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
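The bdevperf summary table above can be sanity-checked by hand: each job uses a 65536-byte (64 KiB) I/O size, so the MiB/s column should equal IOPS multiplied by the I/O size and divided by 1 MiB. A quick check of the Nvme1n1 row (204.33 IOPS, 12.77 MiB/s); this is an editorial sketch, not part of the captured output:

  # MiB/s = IOPS * IO size (65536 B) / 1 MiB, using the Nvme1n1 row above
  awk 'BEGIN { printf "%.2f MiB/s\n", 204.33 * 65536 / 1048576 }'
  # prints: 12.77 MiB/s

The Fail/s column (68.11 for Nvme1n1) appears to report I/Os per second that completed with an error, which is expected here because the targets are being shut down underneath the verify workload.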
00:25:49.705 [2024-04-26 13:08:54.565680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.705 [2024-04-26 13:08:54.565689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14748f0 with addr=10.0.0.2, port=4420 00:25:49.705 [2024-04-26 13:08:54.565696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14748f0 is same with the state(5) to be set 00:25:49.705 [2024-04-26 13:08:54.565704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:49.705 [2024-04-26 13:08:54.565711] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:49.705 [2024-04-26 13:08:54.565719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:49.705 [2024-04-26 13:08:54.565732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:49.705 [2024-04-26 13:08:54.565738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:49.705 [2024-04-26 13:08:54.565745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:49.705 [2024-04-26 13:08:54.565756] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:49.705 [2024-04-26 13:08:54.565762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:49.705 [2024-04-26 13:08:54.565769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:49.705 [2024-04-26 13:08:54.565810] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.705 [2024-04-26 13:08:54.565821] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.705 [2024-04-26 13:08:54.565831] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.705 [2024-04-26 13:08:54.566444] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.705 [2024-04-26 13:08:54.566456] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.705 [2024-04-26 13:08:54.566463] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.705 [2024-04-26 13:08:54.566486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147be90 (9): Bad file descriptor 00:25:49.705 [2024-04-26 13:08:54.566496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e6630 (9): Bad file descriptor 00:25:49.705 [2024-04-26 13:08:54.566506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14748f0 (9): Bad file descriptor 00:25:49.705 [2024-04-26 13:08:54.566779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.705 [2024-04-26 13:08:54.566793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:49.705 [2024-04-26 13:08:54.566801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:49.705 [2024-04-26 13:08:54.566810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:49.705 [2024-04-26 13:08:54.566847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:49.705 [2024-04-26 13:08:54.566854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:49.705 [2024-04-26 13:08:54.566865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:49.705 [2024-04-26 13:08:54.566875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:49.705 [2024-04-26 13:08:54.566881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:49.705 [2024-04-26 13:08:54.566888] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:49.705 [2024-04-26 13:08:54.566897] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:49.705 [2024-04-26 13:08:54.566903] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:49.705 [2024-04-26 13:08:54.566910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:49.705 [2024-04-26 13:08:54.566949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.705 [2024-04-26 13:08:54.566957] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.705 [2024-04-26 13:08:54.566963] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.705 [2024-04-26 13:08:54.567303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.705 [2024-04-26 13:08:54.567630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.705 [2024-04-26 13:08:54.567640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb0220 with addr=10.0.0.2, port=4420 00:25:49.705 [2024-04-26 13:08:54.567648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb0220 is same with the state(5) to be set 00:25:49.705 [2024-04-26 13:08:54.567972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.705 [2024-04-26 13:08:54.568321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.705 [2024-04-26 13:08:54.568330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6b40 with addr=10.0.0.2, port=4420 00:25:49.705 [2024-04-26 13:08:54.568337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b40 is same with the state(5) to be set 00:25:49.705 [2024-04-26 13:08:54.568524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.705 [2024-04-26 13:08:54.568880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.705 [2024-04-26 13:08:54.568889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ee280 with addr=10.0.0.2, port=4420 00:25:49.706 [2024-04-26 13:08:54.568897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ee280 is same with the state(5) to be set 00:25:49.706 [2024-04-26 13:08:54.569142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.706 [2024-04-26 13:08:54.569474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.706 [2024-04-26 13:08:54.569483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ef450 with addr=10.0.0.2, port=4420 00:25:49.706 [2024-04-26 13:08:54.569490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ef450 is same with the state(5) to be set 00:25:49.706 [2024-04-26 13:08:54.569521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0220 (9): Bad file descriptor 00:25:49.706 [2024-04-26 13:08:54.569531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6b40 (9): Bad file descriptor 00:25:49.706 [2024-04-26 13:08:54.569540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ee280 (9): Bad file descriptor 00:25:49.706 [2024-04-26 13:08:54.569549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ef450 (9): Bad file descriptor 00:25:49.706 [2024-04-26 13:08:54.569586] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.706 [2024-04-26 13:08:54.569594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.706 [2024-04-26 13:08:54.569604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:49.706 [2024-04-26 13:08:54.569614] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:49.706 [2024-04-26 13:08:54.569620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:49.706 [2024-04-26 13:08:54.569626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:49.706 [2024-04-26 13:08:54.569635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:49.706 [2024-04-26 13:08:54.569642] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:49.706 [2024-04-26 13:08:54.569648] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:49.706 [2024-04-26 13:08:54.569657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:49.706 [2024-04-26 13:08:54.569663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:49.706 [2024-04-26 13:08:54.569669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:49.706 [2024-04-26 13:08:54.569698] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.706 [2024-04-26 13:08:54.569705] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.706 [2024-04-26 13:08:54.569711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.706 [2024-04-26 13:08:54.569717] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
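The burst of "connect() failed, errno = 111" and "Resetting controller failed." messages above is consistent with the target application going away mid-run, which is exactly what this shutdown test provokes: every reconnect attempt from the host is refused. On Linux, errno 111 is ECONNREFUSED; a one-liner to confirm the mapping (illustrative only, not part of the test):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused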
00:25:49.967 13:08:54 -- target/shutdown.sh@136 -- # nvmfpid= 00:25:49.967 13:08:54 -- target/shutdown.sh@139 -- # sleep 1 00:25:50.964 13:08:55 -- target/shutdown.sh@142 -- # kill -9 4093246 00:25:50.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4093246) - No such process 00:25:50.964 13:08:55 -- target/shutdown.sh@142 -- # true 00:25:50.964 13:08:55 -- target/shutdown.sh@144 -- # stoptarget 00:25:50.964 13:08:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:50.964 13:08:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:50.964 13:08:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:50.964 13:08:55 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:50.964 13:08:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:50.964 13:08:55 -- nvmf/common.sh@117 -- # sync 00:25:50.964 13:08:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.964 13:08:55 -- nvmf/common.sh@120 -- # set +e 00:25:50.964 13:08:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.964 13:08:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.964 rmmod nvme_tcp 00:25:50.964 rmmod nvme_fabrics 00:25:50.964 rmmod nvme_keyring 00:25:50.964 13:08:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.964 13:08:55 -- nvmf/common.sh@124 -- # set -e 00:25:50.964 13:08:55 -- nvmf/common.sh@125 -- # return 0 00:25:50.964 13:08:55 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:25:50.964 13:08:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:50.964 13:08:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:50.964 13:08:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:50.964 13:08:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.964 13:08:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.964 13:08:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.964 13:08:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.964 13:08:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.892 13:08:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.893 00:25:52.893 real 0m7.581s 00:25:52.893 user 0m17.882s 00:25:52.893 sys 0m1.190s 00:25:52.893 13:08:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:52.893 13:08:57 -- common/autotest_common.sh@10 -- # set +x 00:25:52.893 ************************************ 00:25:52.893 END TEST nvmf_shutdown_tc3 00:25:52.893 ************************************ 00:25:53.153 13:08:57 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:25:53.153 00:25:53.153 real 0m32.580s 00:25:53.153 user 1m15.564s 00:25:53.153 sys 0m9.227s 00:25:53.153 13:08:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:53.153 13:08:57 -- common/autotest_common.sh@10 -- # set +x 00:25:53.153 ************************************ 00:25:53.153 END TEST nvmf_shutdown 00:25:53.153 ************************************ 00:25:53.153 13:08:58 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:25:53.153 13:08:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:53.153 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:25:53.153 13:08:58 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:25:53.153 13:08:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:53.153 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:25:53.153 
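The stoptarget/nvmftestfini sequence traced above reduces to a short host-side cleanup. A condensed sketch of the steps visible in the trace follows; SPDK_DIR stands for the spdk tree checked out by this job, and the "ip netns delete" line is an assumption about what _remove_spdk_ns does, since that helper's body is not echoed here:

  rm -f ./local-job0-0-verify.state                                  # bdevperf per-job state file
  rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" "$SPDK_DIR/test/nvmf/target/rpcs.txt"
  sync                                                               # flush before unloading modules
  modprobe -v -r nvme-tcp                                            # also drops nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true                # assumption: done inside _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                           # drop the initiator-side test address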
13:08:58 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:25:53.153 13:08:58 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:53.153 13:08:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:53.154 13:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.154 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 ************************************ 00:25:53.154 START TEST nvmf_multicontroller 00:25:53.154 ************************************ 00:25:53.415 13:08:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:53.415 * Looking for test storage... 00:25:53.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.415 13:08:58 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.415 13:08:58 -- nvmf/common.sh@7 -- # uname -s 00:25:53.415 13:08:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.415 13:08:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.415 13:08:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.415 13:08:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.415 13:08:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.415 13:08:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.415 13:08:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.415 13:08:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.415 13:08:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.415 13:08:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.415 13:08:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:53.415 13:08:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:53.415 13:08:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.415 13:08:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.415 13:08:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.415 13:08:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.415 13:08:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.415 13:08:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.415 13:08:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.415 13:08:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.415 13:08:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.415 13:08:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.415 13:08:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.415 13:08:58 -- paths/export.sh@5 -- # export PATH 00:25:53.416 13:08:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.416 13:08:58 -- nvmf/common.sh@47 -- # : 0 00:25:53.416 13:08:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.416 13:08:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.416 13:08:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.416 13:08:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.416 13:08:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.416 13:08:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.416 13:08:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.416 13:08:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.416 13:08:58 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.416 13:08:58 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.416 13:08:58 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:53.416 13:08:58 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:53.416 13:08:58 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.416 13:08:58 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:53.416 13:08:58 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:53.416 13:08:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:53.416 13:08:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.416 13:08:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:53.416 13:08:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:53.416 13:08:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:53.416 13:08:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.416 13:08:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.416 13:08:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:53.416 13:08:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:53.416 13:08:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:53.416 13:08:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.416 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:01.565 13:09:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:01.565 13:09:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:01.565 13:09:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:01.565 13:09:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:01.565 13:09:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:01.565 13:09:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:01.565 13:09:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:01.565 13:09:05 -- nvmf/common.sh@295 -- # net_devs=() 00:26:01.565 13:09:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:01.565 13:09:05 -- nvmf/common.sh@296 -- # e810=() 00:26:01.565 13:09:05 -- nvmf/common.sh@296 -- # local -ga e810 00:26:01.565 13:09:05 -- nvmf/common.sh@297 -- # x722=() 00:26:01.565 13:09:05 -- nvmf/common.sh@297 -- # local -ga x722 00:26:01.565 13:09:05 -- nvmf/common.sh@298 -- # mlx=() 00:26:01.565 13:09:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:01.565 13:09:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.565 13:09:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.566 13:09:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.566 13:09:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.566 13:09:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:01.566 13:09:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:01.566 13:09:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:01.566 13:09:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.566 13:09:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:01.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:01.566 13:09:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.566 13:09:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:01.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:01.566 13:09:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:26:01.566 13:09:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:01.566 13:09:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.566 13:09:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.566 13:09:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:01.566 13:09:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.566 13:09:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:01.566 Found net devices under 0000:31:00.0: cvl_0_0 00:26:01.566 13:09:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.566 13:09:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.566 13:09:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.566 13:09:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:01.566 13:09:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.566 13:09:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:01.566 Found net devices under 0000:31:00.1: cvl_0_1 00:26:01.566 13:09:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.566 13:09:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:01.566 13:09:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:01.566 13:09:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:01.566 13:09:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.566 13:09:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.566 13:09:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.566 13:09:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:01.566 13:09:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.566 13:09:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.566 13:09:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:01.566 13:09:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.566 13:09:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.566 13:09:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:01.566 13:09:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:01.566 13:09:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.566 13:09:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.566 13:09:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.566 13:09:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.566 13:09:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:01.566 13:09:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.566 13:09:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.566 13:09:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:26:01.566 13:09:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:01.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:26:01.566 00:26:01.566 --- 10.0.0.2 ping statistics --- 00:26:01.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.566 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:26:01.566 13:09:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:26:01.566 00:26:01.566 --- 10.0.0.1 ping statistics --- 00:26:01.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.566 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:01.566 13:09:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.566 13:09:05 -- nvmf/common.sh@411 -- # return 0 00:26:01.566 13:09:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:01.566 13:09:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.566 13:09:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:01.566 13:09:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.566 13:09:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:01.566 13:09:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:01.566 13:09:05 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:01.566 13:09:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:01.566 13:09:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:01.566 13:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 13:09:05 -- nvmf/common.sh@470 -- # nvmfpid=4098315 00:26:01.566 13:09:05 -- nvmf/common.sh@471 -- # waitforlisten 4098315 00:26:01.566 13:09:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:01.566 13:09:05 -- common/autotest_common.sh@817 -- # '[' -z 4098315 ']' 00:26:01.566 13:09:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.566 13:09:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:01.566 13:09:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.566 13:09:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:01.566 13:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 [2024-04-26 13:09:05.640388] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:01.566 [2024-04-26 13:09:05.640453] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.566 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.566 [2024-04-26 13:09:05.728802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:01.566 [2024-04-26 13:09:05.821463] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:01.566 [2024-04-26 13:09:05.821519] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.566 [2024-04-26 13:09:05.821528] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.566 [2024-04-26 13:09:05.821535] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.566 [2024-04-26 13:09:05.821541] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.566 [2024-04-26 13:09:05.821681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.566 [2024-04-26 13:09:05.821854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.566 [2024-04-26 13:09:05.821864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.566 13:09:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:01.566 13:09:06 -- common/autotest_common.sh@850 -- # return 0 00:26:01.566 13:09:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:01.566 13:09:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:01.566 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 13:09:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.566 13:09:06 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.566 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.566 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 [2024-04-26 13:09:06.459687] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.566 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.566 13:09:06 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:01.566 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.566 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 Malloc0 00:26:01.566 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.566 13:09:06 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.566 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.566 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.566 13:09:06 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.566 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.566 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.566 13:09:06 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.566 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.566 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.566 [2024-04-26 13:09:06.524166] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.567 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.567 13:09:06 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:01.567 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.567 13:09:06 
-- common/autotest_common.sh@10 -- # set +x 00:26:01.567 [2024-04-26 13:09:06.536144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.567 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.567 13:09:06 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:01.567 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.567 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.567 Malloc1 00:26:01.567 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.567 13:09:06 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:01.567 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.567 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.567 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.567 13:09:06 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:01.567 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.567 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.567 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.567 13:09:06 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:01.567 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.567 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.567 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.567 13:09:06 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:01.567 13:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.567 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:01.567 13:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.567 13:09:06 -- host/multicontroller.sh@44 -- # bdevperf_pid=4098510 00:26:01.567 13:09:06 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:01.567 13:09:06 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:01.567 13:09:06 -- host/multicontroller.sh@47 -- # waitforlisten 4098510 /var/tmp/bdevperf.sock 00:26:01.567 13:09:06 -- common/autotest_common.sh@817 -- # '[' -z 4098510 ']' 00:26:01.567 13:09:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:01.567 13:09:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:01.567 13:09:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:01.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
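At this point the multicontroller test has built its target: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, a TCP transport is created, and two subsystems (cnode1/cnode2) each expose a malloc namespace on listeners 4420 and 4421, after which bdevperf is launched with its own RPC socket. The rpc_cmd helper in the trace appears to forward to SPDK's rpc.py; an equivalent standalone sequence would look roughly like the sketch below (the paths and the wrapper behaviour are assumptions based on the trace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"                       # assumption: rpc_cmd wraps this script

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2/Malloc1 follow the same pattern, then bdevperf gets a private RPC socket
  # so controllers can be attached and detached while it runs:
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &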
00:26:01.567 13:09:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:01.567 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:02.509 13:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:02.509 13:09:07 -- common/autotest_common.sh@850 -- # return 0 00:26:02.509 13:09:07 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:02.509 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.509 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.509 NVMe0n1 00:26:02.509 13:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.509 13:09:07 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:02.509 13:09:07 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:02.509 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.509 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.509 13:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.509 1 00:26:02.509 13:09:07 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:02.509 13:09:07 -- common/autotest_common.sh@638 -- # local es=0 00:26:02.509 13:09:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:02.509 13:09:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:02.509 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.509 13:09:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:02.509 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.509 13:09:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:02.509 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.509 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.509 request: 00:26:02.509 { 00:26:02.509 "name": "NVMe0", 00:26:02.509 "trtype": "tcp", 00:26:02.509 "traddr": "10.0.0.2", 00:26:02.509 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:02.509 "hostaddr": "10.0.0.2", 00:26:02.509 "hostsvcid": "60000", 00:26:02.509 "adrfam": "ipv4", 00:26:02.509 "trsvcid": "4420", 00:26:02.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.509 "method": "bdev_nvme_attach_controller", 00:26:02.509 "req_id": 1 00:26:02.509 } 00:26:02.509 Got JSON-RPC error response 00:26:02.509 response: 00:26:02.509 { 00:26:02.509 "code": -114, 00:26:02.509 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:02.509 } 00:26:02.509 13:09:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:02.509 13:09:07 -- common/autotest_common.sh@641 -- # es=1 00:26:02.509 13:09:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:02.509 13:09:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:02.509 13:09:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:02.509 13:09:07 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:02.509 13:09:07 -- common/autotest_common.sh@638 -- # local es=0 00:26:02.509 13:09:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:02.509 13:09:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:02.509 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.509 13:09:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:02.510 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.510 13:09:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:02.510 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.510 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.510 request: 00:26:02.510 { 00:26:02.510 "name": "NVMe0", 00:26:02.510 "trtype": "tcp", 00:26:02.510 "traddr": "10.0.0.2", 00:26:02.510 "hostaddr": "10.0.0.2", 00:26:02.510 "hostsvcid": "60000", 00:26:02.510 "adrfam": "ipv4", 00:26:02.510 "trsvcid": "4420", 00:26:02.510 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:02.510 "method": "bdev_nvme_attach_controller", 00:26:02.510 "req_id": 1 00:26:02.510 } 00:26:02.510 Got JSON-RPC error response 00:26:02.510 response: 00:26:02.510 { 00:26:02.510 "code": -114, 00:26:02.510 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:02.510 } 00:26:02.510 13:09:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:02.510 13:09:07 -- common/autotest_common.sh@641 -- # es=1 00:26:02.510 13:09:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:02.510 13:09:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:02.510 13:09:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:02.510 13:09:07 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:02.510 13:09:07 -- common/autotest_common.sh@638 -- # local es=0 00:26:02.510 13:09:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:02.510 13:09:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:02.510 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.510 13:09:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:02.510 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.510 13:09:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:02.510 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.510 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.771 request: 00:26:02.771 { 00:26:02.771 "name": "NVMe0", 00:26:02.771 "trtype": "tcp", 00:26:02.771 "traddr": "10.0.0.2", 00:26:02.771 "hostaddr": 
"10.0.0.2", 00:26:02.771 "hostsvcid": "60000", 00:26:02.771 "adrfam": "ipv4", 00:26:02.771 "trsvcid": "4420", 00:26:02.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.771 "multipath": "disable", 00:26:02.771 "method": "bdev_nvme_attach_controller", 00:26:02.771 "req_id": 1 00:26:02.771 } 00:26:02.771 Got JSON-RPC error response 00:26:02.771 response: 00:26:02.771 { 00:26:02.771 "code": -114, 00:26:02.771 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:02.771 } 00:26:02.771 13:09:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:02.771 13:09:07 -- common/autotest_common.sh@641 -- # es=1 00:26:02.771 13:09:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:02.771 13:09:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:02.771 13:09:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:02.772 13:09:07 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:02.772 13:09:07 -- common/autotest_common.sh@638 -- # local es=0 00:26:02.772 13:09:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:02.772 13:09:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:02.772 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.772 13:09:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:02.772 13:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:02.772 13:09:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:02.772 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.772 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.772 request: 00:26:02.772 { 00:26:02.772 "name": "NVMe0", 00:26:02.772 "trtype": "tcp", 00:26:02.772 "traddr": "10.0.0.2", 00:26:02.772 "hostaddr": "10.0.0.2", 00:26:02.772 "hostsvcid": "60000", 00:26:02.772 "adrfam": "ipv4", 00:26:02.772 "trsvcid": "4420", 00:26:02.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.772 "multipath": "failover", 00:26:02.772 "method": "bdev_nvme_attach_controller", 00:26:02.772 "req_id": 1 00:26:02.772 } 00:26:02.772 Got JSON-RPC error response 00:26:02.772 response: 00:26:02.772 { 00:26:02.772 "code": -114, 00:26:02.772 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:02.772 } 00:26:02.772 13:09:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:02.772 13:09:07 -- common/autotest_common.sh@641 -- # es=1 00:26:02.772 13:09:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:02.772 13:09:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:02.772 13:09:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:02.772 13:09:07 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:02.772 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.772 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.772 00:26:02.772 13:09:07 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:26:02.772 13:09:07 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:02.772 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.772 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:02.772 13:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.772 13:09:07 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:02.772 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.772 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:03.032 00:26:03.032 13:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.032 13:09:07 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:03.032 13:09:07 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:03.032 13:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.032 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:03.033 13:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.033 13:09:07 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:03.033 13:09:07 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:03.976 0 00:26:04.237 13:09:09 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:04.237 13:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.237 13:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:04.237 13:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.237 13:09:09 -- host/multicontroller.sh@100 -- # killprocess 4098510 00:26:04.237 13:09:09 -- common/autotest_common.sh@936 -- # '[' -z 4098510 ']' 00:26:04.237 13:09:09 -- common/autotest_common.sh@940 -- # kill -0 4098510 00:26:04.237 13:09:09 -- common/autotest_common.sh@941 -- # uname 00:26:04.237 13:09:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.237 13:09:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4098510 00:26:04.237 13:09:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:04.237 13:09:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:04.237 13:09:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4098510' 00:26:04.237 killing process with pid 4098510 00:26:04.237 13:09:09 -- common/autotest_common.sh@955 -- # kill 4098510 00:26:04.237 13:09:09 -- common/autotest_common.sh@960 -- # wait 4098510 00:26:04.237 13:09:09 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.237 13:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.237 13:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:04.237 13:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.237 13:09:09 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:04.237 13:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.237 13:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:04.237 13:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.237 13:09:09 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
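The attach/detach exchanges above are the core of the multicontroller check: once NVMe0 is attached through 10.0.0.2:4420, any bdev_nvme_attach_controller that reuses the controller name on the same path, or asks for a different hostnqn or subsystem, is rejected with JSON-RPC error -114 (which lines up with -EALREADY on Linux), while adding the second listener at port 4421 as an extra path succeeds, as does attaching it under a new name (NVMe1). Against the bdevperf RPC socket the accepted and rejected calls look roughly like this (a sketch of the calls visible in the trace, assuming rpc_cmd maps to scripts/rpc.py):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # first path: accepted, exposes NVMe0n1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # same name and same path again (with -q, -x disable or -x failover): error -114
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover || echo "rejected as expected"

  # second listener on the same subsystem: accepted as an additional path
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1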
00:26:04.237 13:09:09 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.237 13:09:09 -- common/autotest_common.sh@1598 -- # read -r file 00:26:04.237 13:09:09 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:04.237 13:09:09 -- common/autotest_common.sh@1597 -- # sort -u 00:26:04.237 13:09:09 -- common/autotest_common.sh@1599 -- # cat 00:26:04.237 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:04.237 [2024-04-26 13:09:06.653595] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:04.237 [2024-04-26 13:09:06.653651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098510 ] 00:26:04.237 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.237 [2024-04-26 13:09:06.713542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.237 [2024-04-26 13:09:06.776375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.237 [2024-04-26 13:09:07.912246] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 77d08c7e-30d3-446d-a486-3ae4d72750a3 already exists 00:26:04.237 [2024-04-26 13:09:07.912278] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:77d08c7e-30d3-446d-a486-3ae4d72750a3 alias for bdev NVMe1n1 00:26:04.237 [2024-04-26 13:09:07.912288] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:04.237 Running I/O for 1 seconds... 00:26:04.237 00:26:04.237 Latency(us) 00:26:04.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.237 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:04.237 NVMe0n1 : 1.00 25109.11 98.08 0.00 0.00 5082.14 4532.91 13052.59 00:26:04.237 =================================================================================================================== 00:26:04.237 Total : 25109.11 98.08 0.00 0.00 5082.14 4532.91 13052.59 00:26:04.237 Received shutdown signal, test time was about 1.000000 seconds 00:26:04.237 00:26:04.237 Latency(us) 00:26:04.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.237 =================================================================================================================== 00:26:04.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.237 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:04.238 13:09:09 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.238 13:09:09 -- common/autotest_common.sh@1598 -- # read -r file 00:26:04.238 13:09:09 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:04.238 13:09:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:04.238 13:09:09 -- nvmf/common.sh@117 -- # sync 00:26:04.498 13:09:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:04.498 13:09:09 -- nvmf/common.sh@120 -- # set +e 00:26:04.498 13:09:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:04.498 13:09:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:04.498 rmmod nvme_tcp 00:26:04.498 rmmod nvme_fabrics 00:26:04.498 rmmod nvme_keyring 00:26:04.498 13:09:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:04.498 13:09:09 -- nvmf/common.sh@124 -- # set -e 
00:26:04.498 13:09:09 -- nvmf/common.sh@125 -- # return 0 00:26:04.498 13:09:09 -- nvmf/common.sh@478 -- # '[' -n 4098315 ']' 00:26:04.498 13:09:09 -- nvmf/common.sh@479 -- # killprocess 4098315 00:26:04.498 13:09:09 -- common/autotest_common.sh@936 -- # '[' -z 4098315 ']' 00:26:04.498 13:09:09 -- common/autotest_common.sh@940 -- # kill -0 4098315 00:26:04.498 13:09:09 -- common/autotest_common.sh@941 -- # uname 00:26:04.498 13:09:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.498 13:09:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4098315 00:26:04.498 13:09:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:04.498 13:09:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:04.498 13:09:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4098315' 00:26:04.498 killing process with pid 4098315 00:26:04.498 13:09:09 -- common/autotest_common.sh@955 -- # kill 4098315 00:26:04.498 13:09:09 -- common/autotest_common.sh@960 -- # wait 4098315 00:26:04.758 13:09:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:04.758 13:09:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:04.758 13:09:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:04.758 13:09:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:04.758 13:09:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:04.758 13:09:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.758 13:09:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.758 13:09:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.665 13:09:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:06.665 00:26:06.665 real 0m13.408s 00:26:06.665 user 0m16.145s 00:26:06.665 sys 0m6.125s 00:26:06.665 13:09:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:06.665 13:09:11 -- common/autotest_common.sh@10 -- # set +x 00:26:06.665 ************************************ 00:26:06.665 END TEST nvmf_multicontroller 00:26:06.665 ************************************ 00:26:06.665 13:09:11 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:06.665 13:09:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:06.665 13:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.665 13:09:11 -- common/autotest_common.sh@10 -- # set +x 00:26:06.925 ************************************ 00:26:06.925 START TEST nvmf_aer 00:26:06.925 ************************************ 00:26:06.925 13:09:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:06.925 * Looking for test storage... 
00:26:06.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:06.925 13:09:11 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.925 13:09:11 -- nvmf/common.sh@7 -- # uname -s 00:26:06.925 13:09:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.925 13:09:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.925 13:09:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.925 13:09:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.925 13:09:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.925 13:09:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.925 13:09:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.925 13:09:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.925 13:09:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.925 13:09:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.925 13:09:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:06.925 13:09:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:06.925 13:09:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.925 13:09:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.925 13:09:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.925 13:09:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.925 13:09:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.925 13:09:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.925 13:09:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.925 13:09:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.925 13:09:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.925 13:09:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.925 13:09:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.925 13:09:11 -- paths/export.sh@5 -- # export PATH 00:26:06.925 13:09:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.925 13:09:11 -- nvmf/common.sh@47 -- # : 0 00:26:06.925 13:09:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:06.925 13:09:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:06.925 13:09:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.925 13:09:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.925 13:09:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.925 13:09:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:06.925 13:09:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:06.925 13:09:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:06.925 13:09:11 -- host/aer.sh@11 -- # nvmftestinit 00:26:06.925 13:09:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:06.925 13:09:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.925 13:09:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:06.925 13:09:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:06.925 13:09:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:06.925 13:09:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.925 13:09:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.925 13:09:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.925 13:09:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:06.925 13:09:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:06.925 13:09:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:06.925 13:09:11 -- common/autotest_common.sh@10 -- # set +x 00:26:15.067 13:09:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:15.067 13:09:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:15.067 13:09:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:15.067 13:09:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:15.067 13:09:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:15.067 13:09:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:15.067 13:09:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:15.067 13:09:18 -- nvmf/common.sh@295 -- # net_devs=() 00:26:15.067 13:09:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:15.067 13:09:18 -- nvmf/common.sh@296 -- # e810=() 00:26:15.067 13:09:18 -- nvmf/common.sh@296 -- # local -ga e810 00:26:15.067 13:09:18 -- nvmf/common.sh@297 -- # x722=() 00:26:15.067 
13:09:18 -- nvmf/common.sh@297 -- # local -ga x722 00:26:15.067 13:09:18 -- nvmf/common.sh@298 -- # mlx=() 00:26:15.067 13:09:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:15.067 13:09:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.067 13:09:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:15.067 13:09:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:15.067 13:09:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:15.067 13:09:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.067 13:09:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:15.067 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:15.067 13:09:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.067 13:09:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:15.067 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:15.067 13:09:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:15.067 13:09:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.067 13:09:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.067 13:09:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:15.067 13:09:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.067 13:09:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:15.067 Found net devices under 0000:31:00.0: cvl_0_0 00:26:15.067 13:09:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.067 13:09:18 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.067 13:09:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.067 13:09:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:15.067 13:09:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.067 13:09:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:15.067 Found net devices under 0000:31:00.1: cvl_0_1 00:26:15.067 13:09:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.067 13:09:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:15.067 13:09:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:15.067 13:09:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:15.067 13:09:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:15.067 13:09:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.067 13:09:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.067 13:09:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.067 13:09:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:15.067 13:09:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.067 13:09:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.067 13:09:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:15.067 13:09:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.067 13:09:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.067 13:09:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:15.067 13:09:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:15.067 13:09:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.067 13:09:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.067 13:09:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.067 13:09:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.067 13:09:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:15.067 13:09:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.067 13:09:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.067 13:09:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.067 13:09:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:15.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:26:15.067 00:26:15.067 --- 10.0.0.2 ping statistics --- 00:26:15.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.067 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:26:15.067 13:09:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:26:15.067 00:26:15.067 --- 10.0.0.1 ping statistics --- 00:26:15.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.067 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:26:15.067 13:09:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.067 13:09:19 -- nvmf/common.sh@411 -- # return 0 00:26:15.067 13:09:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:15.067 13:09:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.067 13:09:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:15.067 13:09:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:15.067 13:09:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.067 13:09:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:15.067 13:09:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:15.067 13:09:19 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:15.067 13:09:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:15.067 13:09:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:15.067 13:09:19 -- common/autotest_common.sh@10 -- # set +x 00:26:15.067 13:09:19 -- nvmf/common.sh@470 -- # nvmfpid=4103719 00:26:15.067 13:09:19 -- nvmf/common.sh@471 -- # waitforlisten 4103719 00:26:15.067 13:09:19 -- common/autotest_common.sh@817 -- # '[' -z 4103719 ']' 00:26:15.067 13:09:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.067 13:09:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:15.067 13:09:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.067 13:09:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:15.067 13:09:19 -- common/autotest_common.sh@10 -- # set +x 00:26:15.067 13:09:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:15.068 [2024-04-26 13:09:19.252255] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:15.068 [2024-04-26 13:09:19.252318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.068 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.068 [2024-04-26 13:09:19.324623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.068 [2024-04-26 13:09:19.399050] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.068 [2024-04-26 13:09:19.399092] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.068 [2024-04-26 13:09:19.399100] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.068 [2024-04-26 13:09:19.399109] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.068 [2024-04-26 13:09:19.399115] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
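Condensed out of the xtrace noise, the environment bring-up the aer.sh run just traced (nvmftestinit plus nvmfappstart -m 0xF) amounts to roughly the following. Interface names, addresses, masks and the core/event masks are the ones printed in the log; paths are abbreviated relative to the spdk checkout, and the readiness loop at the end is an illustrative assumption rather than the harness's exact code.

    # Move one port of the E810 pair into a private netns and address both ends
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    sudo modprobe nvme-tcp
    # Start the target inside the netns with the same masks as in the log
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Assumed wait-for-RPC-socket step (the log uses waitforlisten instead)
    until sudo ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done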
00:26:15.068 [2024-04-26 13:09:19.399267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.068 [2024-04-26 13:09:19.399360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.068 [2024-04-26 13:09:19.399516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.068 [2024-04-26 13:09:19.399517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.068 13:09:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:15.068 13:09:20 -- common/autotest_common.sh@850 -- # return 0 00:26:15.068 13:09:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:15.068 13:09:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:15.068 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.068 13:09:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.068 13:09:20 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.068 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.068 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.068 [2024-04-26 13:09:20.079373] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.068 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.068 13:09:20 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:15.068 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.068 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.068 Malloc0 00:26:15.068 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.068 13:09:20 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:15.068 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.068 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.068 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.068 13:09:20 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.068 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.068 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.068 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.068 13:09:20 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.068 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.068 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.068 [2024-04-26 13:09:20.122802] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.328 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.328 13:09:20 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:15.329 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.329 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.329 [2024-04-26 13:09:20.130610] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:15.329 [ 00:26:15.329 { 00:26:15.329 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:15.329 "subtype": "Discovery", 00:26:15.329 "listen_addresses": [], 00:26:15.329 "allow_any_host": true, 00:26:15.329 "hosts": [] 00:26:15.329 }, 00:26:15.329 { 00:26:15.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:15.329 "subtype": "NVMe", 00:26:15.329 "listen_addresses": [ 00:26:15.329 { 00:26:15.329 "transport": "TCP", 00:26:15.329 "trtype": "TCP", 00:26:15.329 "adrfam": "IPv4", 00:26:15.329 "traddr": "10.0.0.2", 00:26:15.329 "trsvcid": "4420" 00:26:15.329 } 00:26:15.329 ], 00:26:15.329 "allow_any_host": true, 00:26:15.329 "hosts": [], 00:26:15.329 "serial_number": "SPDK00000000000001", 00:26:15.329 "model_number": "SPDK bdev Controller", 00:26:15.329 "max_namespaces": 2, 00:26:15.329 "min_cntlid": 1, 00:26:15.329 "max_cntlid": 65519, 00:26:15.329 "namespaces": [ 00:26:15.329 { 00:26:15.329 "nsid": 1, 00:26:15.329 "bdev_name": "Malloc0", 00:26:15.329 "name": "Malloc0", 00:26:15.329 "nguid": "B29078B2BA94499F8CE39D1A04A9D307", 00:26:15.329 "uuid": "b29078b2-ba94-499f-8ce3-9d1a04a9d307" 00:26:15.329 } 00:26:15.329 ] 00:26:15.329 } 00:26:15.329 ] 00:26:15.329 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.329 13:09:20 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:15.329 13:09:20 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:15.329 13:09:20 -- host/aer.sh@33 -- # aerpid=4104021 00:26:15.329 13:09:20 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:15.329 13:09:20 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:15.329 13:09:20 -- common/autotest_common.sh@1251 -- # local i=0 00:26:15.329 13:09:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:15.329 13:09:20 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:26:15.329 13:09:20 -- common/autotest_common.sh@1254 -- # i=1 00:26:15.329 13:09:20 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:15.329 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.329 13:09:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:15.329 13:09:20 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:26:15.329 13:09:20 -- common/autotest_common.sh@1254 -- # i=2 00:26:15.329 13:09:20 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:15.329 13:09:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:15.329 13:09:20 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:26:15.329 13:09:20 -- common/autotest_common.sh@1254 -- # i=3 00:26:15.329 13:09:20 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:15.589 13:09:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:15.589 13:09:20 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:15.589 13:09:20 -- common/autotest_common.sh@1262 -- # return 0 00:26:15.589 13:09:20 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:15.589 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.589 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.589 Malloc1 00:26:15.589 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.589 13:09:20 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:15.589 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.589 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.589 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.589 13:09:20 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:15.589 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.589 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.589 Asynchronous Event Request test 00:26:15.589 Attaching to 10.0.0.2 00:26:15.589 Attached to 10.0.0.2 00:26:15.589 Registering asynchronous event callbacks... 00:26:15.589 Starting namespace attribute notice tests for all controllers... 00:26:15.589 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:15.589 aer_cb - Changed Namespace 00:26:15.589 Cleaning up... 00:26:15.589 [ 00:26:15.589 { 00:26:15.589 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:15.589 "subtype": "Discovery", 00:26:15.589 "listen_addresses": [], 00:26:15.589 "allow_any_host": true, 00:26:15.589 "hosts": [] 00:26:15.589 }, 00:26:15.589 { 00:26:15.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.589 "subtype": "NVMe", 00:26:15.589 "listen_addresses": [ 00:26:15.589 { 00:26:15.589 "transport": "TCP", 00:26:15.589 "trtype": "TCP", 00:26:15.589 "adrfam": "IPv4", 00:26:15.589 "traddr": "10.0.0.2", 00:26:15.589 "trsvcid": "4420" 00:26:15.589 } 00:26:15.589 ], 00:26:15.589 "allow_any_host": true, 00:26:15.589 "hosts": [], 00:26:15.589 "serial_number": "SPDK00000000000001", 00:26:15.589 "model_number": "SPDK bdev Controller", 00:26:15.589 "max_namespaces": 2, 00:26:15.589 "min_cntlid": 1, 00:26:15.589 "max_cntlid": 65519, 00:26:15.589 "namespaces": [ 00:26:15.589 { 00:26:15.589 "nsid": 1, 00:26:15.589 "bdev_name": "Malloc0", 00:26:15.589 "name": "Malloc0", 00:26:15.589 "nguid": "B29078B2BA94499F8CE39D1A04A9D307", 00:26:15.589 "uuid": "b29078b2-ba94-499f-8ce3-9d1a04a9d307" 00:26:15.589 }, 00:26:15.589 { 00:26:15.589 "nsid": 2, 00:26:15.589 "bdev_name": "Malloc1", 00:26:15.589 "name": "Malloc1", 00:26:15.589 "nguid": "7199B552438549268D45D1707E4123D1", 00:26:15.589 "uuid": "7199b552-4385-4926-8d45-d1707e4123d1" 00:26:15.589 } 00:26:15.589 ] 00:26:15.589 } 00:26:15.589 ] 00:26:15.589 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.589 13:09:20 -- host/aer.sh@43 -- # wait 4104021 00:26:15.589 13:09:20 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:15.589 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.589 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.589 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.589 13:09:20 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:15.589 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.589 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.589 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.589 13:09:20 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.589 13:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.589 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:15.589 13:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.589 13:09:20 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:15.589 13:09:20 -- host/aer.sh@51 -- # nvmftestfini 00:26:15.589 13:09:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:15.589 13:09:20 -- nvmf/common.sh@117 -- # sync 00:26:15.589 13:09:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:15.589 13:09:20 -- nvmf/common.sh@120 -- # set +e 00:26:15.589 13:09:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:15.589 13:09:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:15.589 rmmod nvme_tcp 00:26:15.589 rmmod nvme_fabrics 00:26:15.589 rmmod nvme_keyring 00:26:15.589 13:09:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:15.589 13:09:20 -- nvmf/common.sh@124 -- # set -e 00:26:15.589 13:09:20 -- nvmf/common.sh@125 -- # return 0 00:26:15.589 13:09:20 -- nvmf/common.sh@478 -- # '[' -n 4103719 ']' 00:26:15.589 13:09:20 -- nvmf/common.sh@479 -- # killprocess 4103719 00:26:15.589 13:09:20 -- common/autotest_common.sh@936 -- # '[' -z 4103719 ']' 00:26:15.589 13:09:20 -- common/autotest_common.sh@940 -- # kill -0 4103719 00:26:15.589 13:09:20 -- common/autotest_common.sh@941 -- # uname 00:26:15.590 13:09:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:15.590 13:09:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4103719 00:26:15.849 13:09:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:15.849 13:09:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:15.849 13:09:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4103719' 00:26:15.849 killing process with pid 4103719 00:26:15.849 13:09:20 -- common/autotest_common.sh@955 -- # kill 4103719 00:26:15.849 [2024-04-26 13:09:20.694513] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:15.849 13:09:20 -- common/autotest_common.sh@960 -- # wait 4103719 00:26:15.849 13:09:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:15.849 13:09:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:15.849 13:09:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:15.849 13:09:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.849 13:09:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:15.849 13:09:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.849 13:09:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.849 13:09:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.390 13:09:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:18.390 00:26:18.390 real 0m11.096s 00:26:18.390 user 0m7.860s 00:26:18.390 sys 0m5.762s 00:26:18.390 13:09:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:18.390 13:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:18.390 ************************************ 00:26:18.390 END TEST nvmf_aer 00:26:18.390 ************************************ 00:26:18.390 13:09:22 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:18.390 13:09:22 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:26:18.390 13:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:18.390 13:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:18.390 ************************************ 00:26:18.390 START TEST nvmf_async_init 00:26:18.390 ************************************ 00:26:18.390 13:09:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:18.390 * Looking for test storage... 00:26:18.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:18.390 13:09:23 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.390 13:09:23 -- nvmf/common.sh@7 -- # uname -s 00:26:18.390 13:09:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.390 13:09:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.390 13:09:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.390 13:09:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.390 13:09:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.390 13:09:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.390 13:09:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.390 13:09:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.390 13:09:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.390 13:09:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.390 13:09:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:18.390 13:09:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:18.390 13:09:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.390 13:09:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.390 13:09:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.390 13:09:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.390 13:09:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.390 13:09:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.390 13:09:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.390 13:09:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.390 13:09:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.390 13:09:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.390 13:09:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.390 13:09:23 -- paths/export.sh@5 -- # export PATH 00:26:18.390 13:09:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.390 13:09:23 -- nvmf/common.sh@47 -- # : 0 00:26:18.390 13:09:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.390 13:09:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.390 13:09:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.391 13:09:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.391 13:09:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.391 13:09:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:18.391 13:09:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.391 13:09:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.391 13:09:23 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:18.391 13:09:23 -- host/async_init.sh@14 -- # null_block_size=512 00:26:18.391 13:09:23 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:18.391 13:09:23 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:18.391 13:09:23 -- host/async_init.sh@20 -- # uuidgen 00:26:18.391 13:09:23 -- host/async_init.sh@20 -- # tr -d - 00:26:18.391 13:09:23 -- host/async_init.sh@20 -- # nguid=a73ebf526edd49a4a247681e128b1599 00:26:18.391 13:09:23 -- host/async_init.sh@22 -- # nvmftestinit 00:26:18.391 13:09:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:18.391 13:09:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.391 13:09:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:18.391 13:09:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:18.391 13:09:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:18.391 13:09:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.391 13:09:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.391 13:09:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.391 
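The async_init test pins the namespace identity up front: it strips the hyphens from a freshly generated UUID and passes that as the NGUID, and the same value shows up later, re-hyphenated, as the bdev's uuid and alias in the bdev_get_bdevs output. A minimal sketch of that step, using the value from this run:

    # Namespace GUID handed to nvmf_subsystem_add_ns below
    nguid=$(uuidgen | tr -d -)   # this run: a73ebf526edd49a4a247681e128b1599
    # Later, `rpc.py bdev_get_bdevs -b nvme0n1` reports the hyphenated form
    # (a73ebf52-6edd-49a4-a247-681e128b1599) as the bdev uuid/alias.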
13:09:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:18.391 13:09:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:18.391 13:09:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:18.391 13:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:26.527 13:09:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:26.528 13:09:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.528 13:09:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.528 13:09:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.528 13:09:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.528 13:09:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.528 13:09:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.528 13:09:30 -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.528 13:09:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.528 13:09:30 -- nvmf/common.sh@296 -- # e810=() 00:26:26.528 13:09:30 -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.528 13:09:30 -- nvmf/common.sh@297 -- # x722=() 00:26:26.528 13:09:30 -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.528 13:09:30 -- nvmf/common.sh@298 -- # mlx=() 00:26:26.528 13:09:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.528 13:09:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.528 13:09:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.528 13:09:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.528 13:09:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.528 13:09:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.528 13:09:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:26.528 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:26.528 13:09:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.528 13:09:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:26.528 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:26.528 13:09:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.528 
13:09:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.528 13:09:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.528 13:09:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.528 13:09:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:26.528 13:09:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.528 13:09:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:26.528 Found net devices under 0000:31:00.0: cvl_0_0 00:26:26.528 13:09:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.528 13:09:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.528 13:09:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.528 13:09:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:26.528 13:09:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.528 13:09:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:26.528 Found net devices under 0000:31:00.1: cvl_0_1 00:26:26.528 13:09:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.528 13:09:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:26.528 13:09:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:26.528 13:09:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:26.528 13:09:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.528 13:09:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.528 13:09:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.528 13:09:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.528 13:09:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.528 13:09:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.528 13:09:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.528 13:09:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.528 13:09:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.528 13:09:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.528 13:09:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.528 13:09:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.528 13:09:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.528 13:09:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.528 13:09:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.528 13:09:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.528 13:09:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.528 13:09:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.528 13:09:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:26:26.528 13:09:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:26:26.528 00:26:26.528 --- 10.0.0.2 ping statistics --- 00:26:26.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.528 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:26:26.528 13:09:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:26:26.528 00:26:26.528 --- 10.0.0.1 ping statistics --- 00:26:26.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.528 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:26:26.528 13:09:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.528 13:09:30 -- nvmf/common.sh@411 -- # return 0 00:26:26.528 13:09:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:26.528 13:09:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.528 13:09:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:26.528 13:09:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.528 13:09:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:26.528 13:09:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:26.528 13:09:30 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:26.528 13:09:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:26.528 13:09:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:26.528 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:26.528 13:09:30 -- nvmf/common.sh@470 -- # nvmfpid=4108167 00:26:26.528 13:09:30 -- nvmf/common.sh@471 -- # waitforlisten 4108167 00:26:26.528 13:09:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:26.528 13:09:30 -- common/autotest_common.sh@817 -- # '[' -z 4108167 ']' 00:26:26.528 13:09:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.528 13:09:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:26.528 13:09:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.528 13:09:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:26.528 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:26.528 [2024-04-26 13:09:30.503063] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:26.528 [2024-04-26 13:09:30.503134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.528 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.528 [2024-04-26 13:09:30.574615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.528 [2024-04-26 13:09:30.647570] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.528 [2024-04-26 13:09:30.647607] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:26.528 [2024-04-26 13:09:30.647614] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.528 [2024-04-26 13:09:30.647621] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.528 [2024-04-26 13:09:30.647627] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.528 [2024-04-26 13:09:30.647645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.528 13:09:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:26.528 13:09:31 -- common/autotest_common.sh@850 -- # return 0 00:26:26.528 13:09:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:26.528 13:09:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:26.528 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.528 13:09:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.528 13:09:31 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:26.528 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.528 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.528 [2024-04-26 13:09:31.314831] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.528 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.529 13:09:31 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:26.529 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.529 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.529 null0 00:26:26.529 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.529 13:09:31 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:26.529 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.529 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.529 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.529 13:09:31 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:26.529 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.529 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.529 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.529 13:09:31 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a73ebf526edd49a4a247681e128b1599 00:26:26.529 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.529 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.529 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.529 13:09:31 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.529 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.529 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.529 [2024-04-26 13:09:31.371074] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.529 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.529 13:09:31 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:26.529 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.529 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.789 nvme0n1 00:26:26.789 
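Stripped of the rpc_cmd wrappers, the target-side setup and host attach performed above reduce to this rpc.py sequence (all values are copied from the trace; the rpc.py path is abbreviated, and in this job the commands run inside the cvl_0_0_ns_spdk netns):

    # Target side: transport, null backing bdev, subsystem, namespace, listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g a73ebf526edd49a4a247681e128b1599
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Host side: attach through the SPDK NVMe bdev driver, producing bdev nvme0n1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0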
13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.789 13:09:31 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:26.789 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.789 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.789 [ 00:26:26.789 { 00:26:26.789 "name": "nvme0n1", 00:26:26.789 "aliases": [ 00:26:26.789 "a73ebf52-6edd-49a4-a247-681e128b1599" 00:26:26.789 ], 00:26:26.789 "product_name": "NVMe disk", 00:26:26.789 "block_size": 512, 00:26:26.789 "num_blocks": 2097152, 00:26:26.789 "uuid": "a73ebf52-6edd-49a4-a247-681e128b1599", 00:26:26.789 "assigned_rate_limits": { 00:26:26.789 "rw_ios_per_sec": 0, 00:26:26.789 "rw_mbytes_per_sec": 0, 00:26:26.789 "r_mbytes_per_sec": 0, 00:26:26.789 "w_mbytes_per_sec": 0 00:26:26.789 }, 00:26:26.789 "claimed": false, 00:26:26.789 "zoned": false, 00:26:26.789 "supported_io_types": { 00:26:26.789 "read": true, 00:26:26.789 "write": true, 00:26:26.789 "unmap": false, 00:26:26.789 "write_zeroes": true, 00:26:26.789 "flush": true, 00:26:26.789 "reset": true, 00:26:26.789 "compare": true, 00:26:26.789 "compare_and_write": true, 00:26:26.789 "abort": true, 00:26:26.789 "nvme_admin": true, 00:26:26.789 "nvme_io": true 00:26:26.789 }, 00:26:26.789 "memory_domains": [ 00:26:26.789 { 00:26:26.789 "dma_device_id": "system", 00:26:26.789 "dma_device_type": 1 00:26:26.789 } 00:26:26.789 ], 00:26:26.789 "driver_specific": { 00:26:26.789 "nvme": [ 00:26:26.789 { 00:26:26.789 "trid": { 00:26:26.789 "trtype": "TCP", 00:26:26.789 "adrfam": "IPv4", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:26.789 }, 00:26:26.789 "ctrlr_data": { 00:26:26.789 "cntlid": 1, 00:26:26.789 "vendor_id": "0x8086", 00:26:26.789 "model_number": "SPDK bdev Controller", 00:26:26.789 "serial_number": "00000000000000000000", 00:26:26.789 "firmware_revision": "24.05", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.789 "oacs": { 00:26:26.789 "security": 0, 00:26:26.789 "format": 0, 00:26:26.789 "firmware": 0, 00:26:26.789 "ns_manage": 0 00:26:26.789 }, 00:26:26.789 "multi_ctrlr": true, 00:26:26.789 "ana_reporting": false 00:26:26.789 }, 00:26:26.789 "vs": { 00:26:26.789 "nvme_version": "1.3" 00:26:26.789 }, 00:26:26.789 "ns_data": { 00:26:26.789 "id": 1, 00:26:26.789 "can_share": true 00:26:26.789 } 00:26:26.789 } 00:26:26.789 ], 00:26:26.790 "mp_policy": "active_passive" 00:26:26.790 } 00:26:26.790 } 00:26:26.790 ] 00:26:26.790 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.790 13:09:31 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:26.790 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.790 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.790 [2024-04-26 13:09:31.635635] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.790 [2024-04-26 13:09:31.635695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b3130 (9): Bad file descriptor 00:26:26.790 [2024-04-26 13:09:31.767932] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
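The reset step above is the core of the check: nvme0 is disconnected and reconnected while the bdev stays registered, which is why the controller comes back with cntlid 2 in the bdev_get_bdevs dump that follows. The verification pair, as issued in the trace:

    ./scripts/rpc.py bdev_nvme_reset_controller nvme0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1   # same uuid expected, now behind cntlid 2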
00:26:26.790 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.790 13:09:31 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:26.790 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.790 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.790 [ 00:26:26.790 { 00:26:26.790 "name": "nvme0n1", 00:26:26.790 "aliases": [ 00:26:26.790 "a73ebf52-6edd-49a4-a247-681e128b1599" 00:26:26.790 ], 00:26:26.790 "product_name": "NVMe disk", 00:26:26.790 "block_size": 512, 00:26:26.790 "num_blocks": 2097152, 00:26:26.790 "uuid": "a73ebf52-6edd-49a4-a247-681e128b1599", 00:26:26.790 "assigned_rate_limits": { 00:26:26.790 "rw_ios_per_sec": 0, 00:26:26.790 "rw_mbytes_per_sec": 0, 00:26:26.790 "r_mbytes_per_sec": 0, 00:26:26.790 "w_mbytes_per_sec": 0 00:26:26.790 }, 00:26:26.790 "claimed": false, 00:26:26.790 "zoned": false, 00:26:26.790 "supported_io_types": { 00:26:26.790 "read": true, 00:26:26.790 "write": true, 00:26:26.790 "unmap": false, 00:26:26.790 "write_zeroes": true, 00:26:26.790 "flush": true, 00:26:26.790 "reset": true, 00:26:26.790 "compare": true, 00:26:26.790 "compare_and_write": true, 00:26:26.790 "abort": true, 00:26:26.790 "nvme_admin": true, 00:26:26.790 "nvme_io": true 00:26:26.790 }, 00:26:26.790 "memory_domains": [ 00:26:26.790 { 00:26:26.790 "dma_device_id": "system", 00:26:26.790 "dma_device_type": 1 00:26:26.790 } 00:26:26.790 ], 00:26:26.790 "driver_specific": { 00:26:26.790 "nvme": [ 00:26:26.790 { 00:26:26.790 "trid": { 00:26:26.790 "trtype": "TCP", 00:26:26.790 "adrfam": "IPv4", 00:26:26.790 "traddr": "10.0.0.2", 00:26:26.790 "trsvcid": "4420", 00:26:26.790 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:26.790 }, 00:26:26.790 "ctrlr_data": { 00:26:26.790 "cntlid": 2, 00:26:26.790 "vendor_id": "0x8086", 00:26:26.790 "model_number": "SPDK bdev Controller", 00:26:26.790 "serial_number": "00000000000000000000", 00:26:26.790 "firmware_revision": "24.05", 00:26:26.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.790 "oacs": { 00:26:26.790 "security": 0, 00:26:26.790 "format": 0, 00:26:26.790 "firmware": 0, 00:26:26.790 "ns_manage": 0 00:26:26.790 }, 00:26:26.790 "multi_ctrlr": true, 00:26:26.790 "ana_reporting": false 00:26:26.790 }, 00:26:26.790 "vs": { 00:26:26.790 "nvme_version": "1.3" 00:26:26.790 }, 00:26:26.790 "ns_data": { 00:26:26.790 "id": 1, 00:26:26.790 "can_share": true 00:26:26.790 } 00:26:26.790 } 00:26:26.790 ], 00:26:26.790 "mp_policy": "active_passive" 00:26:26.790 } 00:26:26.790 } 00:26:26.790 ] 00:26:26.790 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.790 13:09:31 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.790 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.790 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.790 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.790 13:09:31 -- host/async_init.sh@53 -- # mktemp 00:26:26.790 13:09:31 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Bdba2ShINA 00:26:26.790 13:09:31 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:26.790 13:09:31 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Bdba2ShINA 00:26:26.790 13:09:31 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:26.790 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.790 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.790 13:09:31 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.790 13:09:31 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:26.790 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.790 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.790 [2024-04-26 13:09:31.832263] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:26.790 [2024-04-26 13:09:31.832376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:26.790 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.790 13:09:31 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bdba2ShINA 00:26:26.790 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.790 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:26.790 [2024-04-26 13:09:31.844282] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:26.790 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.050 13:09:31 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bdba2ShINA 00:26:27.050 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.050 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:27.050 [2024-04-26 13:09:31.856320] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:27.050 [2024-04-26 13:09:31.856358] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:27.050 nvme0n1 00:26:27.050 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.050 13:09:31 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:27.050 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.050 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:27.050 [ 00:26:27.050 { 00:26:27.050 "name": "nvme0n1", 00:26:27.050 "aliases": [ 00:26:27.050 "a73ebf52-6edd-49a4-a247-681e128b1599" 00:26:27.050 ], 00:26:27.050 "product_name": "NVMe disk", 00:26:27.050 "block_size": 512, 00:26:27.050 "num_blocks": 2097152, 00:26:27.050 "uuid": "a73ebf52-6edd-49a4-a247-681e128b1599", 00:26:27.050 "assigned_rate_limits": { 00:26:27.050 "rw_ios_per_sec": 0, 00:26:27.050 "rw_mbytes_per_sec": 0, 00:26:27.050 "r_mbytes_per_sec": 0, 00:26:27.050 "w_mbytes_per_sec": 0 00:26:27.050 }, 00:26:27.050 "claimed": false, 00:26:27.050 "zoned": false, 00:26:27.050 "supported_io_types": { 00:26:27.050 "read": true, 00:26:27.050 "write": true, 00:26:27.050 "unmap": false, 00:26:27.050 "write_zeroes": true, 00:26:27.050 "flush": true, 00:26:27.050 "reset": true, 00:26:27.050 "compare": true, 00:26:27.050 "compare_and_write": true, 00:26:27.050 "abort": true, 00:26:27.050 "nvme_admin": true, 00:26:27.050 "nvme_io": true 00:26:27.050 }, 00:26:27.050 "memory_domains": [ 00:26:27.050 { 00:26:27.050 "dma_device_id": "system", 00:26:27.050 "dma_device_type": 1 00:26:27.050 } 00:26:27.050 ], 00:26:27.050 "driver_specific": { 00:26:27.050 "nvme": [ 00:26:27.050 { 00:26:27.050 "trid": { 00:26:27.050 "trtype": "TCP", 00:26:27.050 "adrfam": "IPv4", 00:26:27.050 "traddr": "10.0.0.2", 
00:26:27.050 "trsvcid": "4421", 00:26:27.050 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:27.050 }, 00:26:27.050 "ctrlr_data": { 00:26:27.050 "cntlid": 3, 00:26:27.050 "vendor_id": "0x8086", 00:26:27.050 "model_number": "SPDK bdev Controller", 00:26:27.050 "serial_number": "00000000000000000000", 00:26:27.050 "firmware_revision": "24.05", 00:26:27.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.050 "oacs": { 00:26:27.050 "security": 0, 00:26:27.050 "format": 0, 00:26:27.050 "firmware": 0, 00:26:27.050 "ns_manage": 0 00:26:27.050 }, 00:26:27.050 "multi_ctrlr": true, 00:26:27.050 "ana_reporting": false 00:26:27.050 }, 00:26:27.050 "vs": { 00:26:27.050 "nvme_version": "1.3" 00:26:27.050 }, 00:26:27.050 "ns_data": { 00:26:27.050 "id": 1, 00:26:27.050 "can_share": true 00:26:27.050 } 00:26:27.050 } 00:26:27.050 ], 00:26:27.050 "mp_policy": "active_passive" 00:26:27.050 } 00:26:27.050 } 00:26:27.050 ] 00:26:27.050 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.050 13:09:31 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.050 13:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.050 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:27.051 13:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.051 13:09:31 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Bdba2ShINA 00:26:27.051 13:09:31 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:27.051 13:09:31 -- host/async_init.sh@78 -- # nvmftestfini 00:26:27.051 13:09:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:27.051 13:09:31 -- nvmf/common.sh@117 -- # sync 00:26:27.051 13:09:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:27.051 13:09:31 -- nvmf/common.sh@120 -- # set +e 00:26:27.051 13:09:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:27.051 13:09:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:27.051 rmmod nvme_tcp 00:26:27.051 rmmod nvme_fabrics 00:26:27.051 rmmod nvme_keyring 00:26:27.051 13:09:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:27.051 13:09:32 -- nvmf/common.sh@124 -- # set -e 00:26:27.051 13:09:32 -- nvmf/common.sh@125 -- # return 0 00:26:27.051 13:09:32 -- nvmf/common.sh@478 -- # '[' -n 4108167 ']' 00:26:27.051 13:09:32 -- nvmf/common.sh@479 -- # killprocess 4108167 00:26:27.051 13:09:32 -- common/autotest_common.sh@936 -- # '[' -z 4108167 ']' 00:26:27.051 13:09:32 -- common/autotest_common.sh@940 -- # kill -0 4108167 00:26:27.051 13:09:32 -- common/autotest_common.sh@941 -- # uname 00:26:27.051 13:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:27.051 13:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4108167 00:26:27.051 13:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:27.051 13:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:27.051 13:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4108167' 00:26:27.051 killing process with pid 4108167 00:26:27.051 13:09:32 -- common/autotest_common.sh@955 -- # kill 4108167 00:26:27.051 [2024-04-26 13:09:32.082203] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:27.051 [2024-04-26 13:09:32.082230] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:27.051 13:09:32 -- common/autotest_common.sh@960 -- # wait 4108167 00:26:27.311 13:09:32 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:27.311 13:09:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:27.311 13:09:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:27.311 13:09:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:27.311 13:09:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:27.311 13:09:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.311 13:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:27.311 13:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.221 13:09:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:29.221 00:26:29.221 real 0m11.169s 00:26:29.221 user 0m3.992s 00:26:29.221 sys 0m5.620s 00:26:29.221 13:09:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:29.221 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.221 ************************************ 00:26:29.221 END TEST nvmf_async_init 00:26:29.221 ************************************ 00:26:29.482 13:09:34 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:29.482 13:09:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:29.482 13:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:29.482 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.482 ************************************ 00:26:29.482 START TEST dma 00:26:29.482 ************************************ 00:26:29.482 13:09:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:29.743 * Looking for test storage... 00:26:29.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.743 13:09:34 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.743 13:09:34 -- nvmf/common.sh@7 -- # uname -s 00:26:29.743 13:09:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.743 13:09:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.743 13:09:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.743 13:09:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.743 13:09:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.743 13:09:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.743 13:09:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.743 13:09:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.743 13:09:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.743 13:09:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.743 13:09:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:29.743 13:09:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:29.743 13:09:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.743 13:09:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.743 13:09:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.743 13:09:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.743 13:09:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.743 13:09:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.743 13:09:34 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.743 13:09:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.743 13:09:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.743 13:09:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.743 13:09:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.743 13:09:34 -- paths/export.sh@5 -- # export PATH 00:26:29.743 13:09:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.743 13:09:34 -- nvmf/common.sh@47 -- # : 0 00:26:29.743 13:09:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:29.744 13:09:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:29.744 13:09:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.744 13:09:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.744 13:09:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.744 13:09:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:29.744 13:09:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:29.744 13:09:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:29.744 13:09:34 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:29.744 13:09:34 -- host/dma.sh@13 -- # exit 0 00:26:29.744 00:26:29.744 real 0m0.135s 00:26:29.744 user 0m0.057s 00:26:29.744 sys 0m0.087s 00:26:29.744 13:09:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:29.744 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.744 ************************************ 00:26:29.744 END TEST dma 00:26:29.744 
************************************ 00:26:29.744 13:09:34 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:29.744 13:09:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:29.744 13:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:29.744 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.004 ************************************ 00:26:30.004 START TEST nvmf_identify 00:26:30.004 ************************************ 00:26:30.004 13:09:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:30.004 * Looking for test storage... 00:26:30.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.004 13:09:34 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.004 13:09:34 -- nvmf/common.sh@7 -- # uname -s 00:26:30.004 13:09:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.004 13:09:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.004 13:09:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.004 13:09:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.004 13:09:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.004 13:09:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.004 13:09:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.004 13:09:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.004 13:09:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.004 13:09:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.004 13:09:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:30.004 13:09:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:30.004 13:09:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.004 13:09:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.004 13:09:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.004 13:09:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.004 13:09:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.004 13:09:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.004 13:09:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.004 13:09:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.005 13:09:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.005 13:09:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.005 13:09:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.005 13:09:34 -- paths/export.sh@5 -- # export PATH 00:26:30.005 13:09:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.005 13:09:34 -- nvmf/common.sh@47 -- # : 0 00:26:30.005 13:09:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:30.005 13:09:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:30.005 13:09:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.005 13:09:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.005 13:09:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.005 13:09:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:30.005 13:09:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:30.005 13:09:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:30.005 13:09:34 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:30.005 13:09:34 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:30.005 13:09:34 -- host/identify.sh@14 -- # nvmftestinit 00:26:30.005 13:09:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:30.005 13:09:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.005 13:09:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:30.005 13:09:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:30.005 13:09:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:30.005 13:09:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.005 13:09:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.005 13:09:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.005 13:09:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:30.005 13:09:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:30.005 13:09:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:30.005 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:38.173 13:09:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:26:38.173 13:09:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:38.173 13:09:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:38.173 13:09:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:38.173 13:09:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:38.173 13:09:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:38.173 13:09:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:38.173 13:09:41 -- nvmf/common.sh@295 -- # net_devs=() 00:26:38.173 13:09:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:38.173 13:09:41 -- nvmf/common.sh@296 -- # e810=() 00:26:38.173 13:09:41 -- nvmf/common.sh@296 -- # local -ga e810 00:26:38.173 13:09:41 -- nvmf/common.sh@297 -- # x722=() 00:26:38.173 13:09:41 -- nvmf/common.sh@297 -- # local -ga x722 00:26:38.173 13:09:41 -- nvmf/common.sh@298 -- # mlx=() 00:26:38.173 13:09:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:38.173 13:09:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.173 13:09:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:38.173 13:09:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:38.173 13:09:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:38.173 13:09:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.173 13:09:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:38.173 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:38.173 13:09:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.173 13:09:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:38.173 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:38.173 13:09:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
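For context on this stretch of the trace: gather_supported_nvmf_pci_devs matches the Intel E810 IDs (0x8086:0x159b in this run) against the PCI bus and then resolves each function to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names in the following lines come from. A hand-run equivalent (sketch only; sysfs paths are the standard kernel layout, PCI address taken from this run) would be:

  pci=0000:31:00.0
  cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device   # 0x8086 / 0x159b -> E810 (ice driver)
  ls /sys/bus/pci/devices/$pci/net/                                       # -> cvl_0_0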
00:26:38.173 13:09:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.173 13:09:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.173 13:09:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:38.173 13:09:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.173 13:09:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:38.173 Found net devices under 0000:31:00.0: cvl_0_0 00:26:38.173 13:09:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.173 13:09:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.173 13:09:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.173 13:09:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:38.173 13:09:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.173 13:09:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:38.173 Found net devices under 0000:31:00.1: cvl_0_1 00:26:38.173 13:09:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.173 13:09:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:38.173 13:09:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:38.173 13:09:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:38.173 13:09:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:38.173 13:09:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.173 13:09:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.173 13:09:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.173 13:09:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:38.173 13:09:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.173 13:09:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.173 13:09:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:38.173 13:09:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.173 13:09:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.173 13:09:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:38.173 13:09:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:38.173 13:09:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.173 13:09:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.174 13:09:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.174 13:09:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.174 13:09:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:38.174 13:09:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.174 13:09:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.174 13:09:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.174 13:09:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:38.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:38.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:26:38.174 00:26:38.174 --- 10.0.0.2 ping statistics --- 00:26:38.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.174 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:26:38.174 13:09:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:26:38.174 00:26:38.174 --- 10.0.0.1 ping statistics --- 00:26:38.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.174 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:26:38.174 13:09:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.174 13:09:42 -- nvmf/common.sh@411 -- # return 0 00:26:38.174 13:09:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:38.174 13:09:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.174 13:09:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:38.174 13:09:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:38.174 13:09:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.174 13:09:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:38.174 13:09:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:38.174 13:09:42 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:38.174 13:09:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:38.174 13:09:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 13:09:42 -- host/identify.sh@19 -- # nvmfpid=4112935 00:26:38.174 13:09:42 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.174 13:09:42 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:38.174 13:09:42 -- host/identify.sh@23 -- # waitforlisten 4112935 00:26:38.174 13:09:42 -- common/autotest_common.sh@817 -- # '[' -z 4112935 ']' 00:26:38.174 13:09:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.174 13:09:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:38.174 13:09:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.174 13:09:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:38.174 13:09:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 [2024-04-26 13:09:42.256449] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:38.174 [2024-04-26 13:09:42.256499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.174 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.174 [2024-04-26 13:09:42.324403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:38.174 [2024-04-26 13:09:42.390336] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.174 [2024-04-26 13:09:42.390375] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
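Condensed from the nvmf_tcp_init xtrace above (same interface and address names as this run), the test network that those pings just verified was built roughly as follows, with the target port isolated in its own network namespace and the initiator port left in the root namespace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator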
00:26:38.174 [2024-04-26 13:09:42.390383] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.174 [2024-04-26 13:09:42.390391] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.174 [2024-04-26 13:09:42.390398] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.174 [2024-04-26 13:09:42.390574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.174 [2024-04-26 13:09:42.390687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.174 [2024-04-26 13:09:42.390848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.174 [2024-04-26 13:09:42.390857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:38.174 13:09:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:38.174 13:09:43 -- common/autotest_common.sh@850 -- # return 0 00:26:38.174 13:09:43 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.174 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 [2024-04-26 13:09:43.035293] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.174 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.174 13:09:43 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:38.174 13:09:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 13:09:43 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:38.174 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 Malloc0 00:26:38.174 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.174 13:09:43 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:38.174 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.174 13:09:43 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:38.174 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.174 13:09:43 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.174 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 [2024-04-26 13:09:43.134824] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.174 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.174 13:09:43 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:38.174 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.174 13:09:43 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:26:38.174 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.174 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.174 [2024-04-26 13:09:43.158663] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:38.174 [ 00:26:38.174 { 00:26:38.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:38.174 "subtype": "Discovery", 00:26:38.174 "listen_addresses": [ 00:26:38.174 { 00:26:38.174 "transport": "TCP", 00:26:38.174 "trtype": "TCP", 00:26:38.174 "adrfam": "IPv4", 00:26:38.174 "traddr": "10.0.0.2", 00:26:38.174 "trsvcid": "4420" 00:26:38.174 } 00:26:38.174 ], 00:26:38.174 "allow_any_host": true, 00:26:38.174 "hosts": [] 00:26:38.174 }, 00:26:38.174 { 00:26:38.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.174 "subtype": "NVMe", 00:26:38.174 "listen_addresses": [ 00:26:38.174 { 00:26:38.174 "transport": "TCP", 00:26:38.174 "trtype": "TCP", 00:26:38.174 "adrfam": "IPv4", 00:26:38.174 "traddr": "10.0.0.2", 00:26:38.174 "trsvcid": "4420" 00:26:38.174 } 00:26:38.174 ], 00:26:38.174 "allow_any_host": true, 00:26:38.174 "hosts": [], 00:26:38.174 "serial_number": "SPDK00000000000001", 00:26:38.174 "model_number": "SPDK bdev Controller", 00:26:38.174 "max_namespaces": 32, 00:26:38.174 "min_cntlid": 1, 00:26:38.174 "max_cntlid": 65519, 00:26:38.174 "namespaces": [ 00:26:38.174 { 00:26:38.174 "nsid": 1, 00:26:38.174 "bdev_name": "Malloc0", 00:26:38.174 "name": "Malloc0", 00:26:38.174 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:38.174 "eui64": "ABCDEF0123456789", 00:26:38.174 "uuid": "e4915dec-44ca-4e40-b2d7-07f9be99ea22" 00:26:38.174 } 00:26:38.174 ] 00:26:38.174 } 00:26:38.174 ] 00:26:38.174 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.174 13:09:43 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:38.174 [2024-04-26 13:09:43.195037] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
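Stripped of the xtrace noise, the target-side setup that produced the subsystem listing above, followed by the identify run now starting, is equivalent to this short sequence (a sketch: rpc_cmd in the test is effectively a wrapper over SPDK's scripts/rpc.py, default RPC socket assumed):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all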
00:26:38.174 [2024-04-26 13:09:43.195077] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113020 ] 00:26:38.174 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.174 [2024-04-26 13:09:43.227477] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:38.174 [2024-04-26 13:09:43.227525] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:38.174 [2024-04-26 13:09:43.227530] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:38.174 [2024-04-26 13:09:43.227543] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:38.174 [2024-04-26 13:09:43.227550] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:38.174 [2024-04-26 13:09:43.230867] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:38.174 [2024-04-26 13:09:43.230899] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c7cd10 0 00:26:38.461 [2024-04-26 13:09:43.238850] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:38.461 [2024-04-26 13:09:43.238861] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:38.461 [2024-04-26 13:09:43.238865] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:38.461 [2024-04-26 13:09:43.238868] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:38.461 [2024-04-26 13:09:43.238903] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.238909] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.238913] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.238925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:38.461 [2024-04-26 13:09:43.238941] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.246847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.461 [2024-04-26 13:09:43.246856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.461 [2024-04-26 13:09:43.246860] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.246864] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.461 [2024-04-26 13:09:43.246878] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:38.461 [2024-04-26 13:09:43.246885] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:38.461 [2024-04-26 13:09:43.246890] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:38.461 [2024-04-26 13:09:43.246903] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.246907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.246910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.246918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.461 [2024-04-26 13:09:43.246930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.247120] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.461 [2024-04-26 13:09:43.247127] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.461 [2024-04-26 13:09:43.247130] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247134] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.461 [2024-04-26 13:09:43.247142] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:38.461 [2024-04-26 13:09:43.247149] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:38.461 [2024-04-26 13:09:43.247156] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247160] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247163] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.247170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.461 [2024-04-26 13:09:43.247180] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.247342] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.461 [2024-04-26 13:09:43.247348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.461 [2024-04-26 13:09:43.247354] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247358] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.461 [2024-04-26 13:09:43.247364] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:38.461 [2024-04-26 13:09:43.247372] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:38.461 [2024-04-26 13:09:43.247379] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247383] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247386] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.247393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.461 [2024-04-26 13:09:43.247403] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.247573] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.461 [2024-04-26 
13:09:43.247580] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.461 [2024-04-26 13:09:43.247583] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247587] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.461 [2024-04-26 13:09:43.247592] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:38.461 [2024-04-26 13:09:43.247601] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247605] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247608] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.247615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.461 [2024-04-26 13:09:43.247625] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.247831] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.461 [2024-04-26 13:09:43.247841] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.461 [2024-04-26 13:09:43.247845] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247849] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.461 [2024-04-26 13:09:43.247854] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:38.461 [2024-04-26 13:09:43.247859] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:38.461 [2024-04-26 13:09:43.247866] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:38.461 [2024-04-26 13:09:43.247971] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:38.461 [2024-04-26 13:09:43.247976] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:38.461 [2024-04-26 13:09:43.247984] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247988] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.247991] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.247998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.461 [2024-04-26 13:09:43.248010] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.248181] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.461 [2024-04-26 13:09:43.248187] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.461 [2024-04-26 13:09:43.248191] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.248195] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.461 [2024-04-26 13:09:43.248200] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:38.461 [2024-04-26 13:09:43.248209] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.248213] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.248216] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.248223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.461 [2024-04-26 13:09:43.248232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.248435] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.461 [2024-04-26 13:09:43.248441] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.461 [2024-04-26 13:09:43.248445] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.248448] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.461 [2024-04-26 13:09:43.248454] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:38.461 [2024-04-26 13:09:43.248458] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:38.461 [2024-04-26 13:09:43.248466] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:38.461 [2024-04-26 13:09:43.248477] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:38.461 [2024-04-26 13:09:43.248485] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.461 [2024-04-26 13:09:43.248489] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.461 [2024-04-26 13:09:43.248496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.461 [2024-04-26 13:09:43.248506] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.461 [2024-04-26 13:09:43.248748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.462 [2024-04-26 13:09:43.248754] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.462 [2024-04-26 13:09:43.248758] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.248762] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7cd10): datao=0, datal=4096, cccid=0 00:26:38.462 [2024-04-26 13:09:43.248766] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4a60) on tqpair(0x1c7cd10): expected_datao=0, payload_size=4096 00:26:38.462 [2024-04-26 13:09:43.248771] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.248789] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.248794] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.292846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.292856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.292859] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.292866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.462 [2024-04-26 13:09:43.292875] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:38.462 [2024-04-26 13:09:43.292880] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:38.462 [2024-04-26 13:09:43.292885] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:38.462 [2024-04-26 13:09:43.292892] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:38.462 [2024-04-26 13:09:43.292897] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:38.462 [2024-04-26 13:09:43.292902] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:38.462 [2024-04-26 13:09:43.292910] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:38.462 [2024-04-26 13:09:43.292917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.292920] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.292924] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.292931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:38.462 [2024-04-26 13:09:43.292943] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.462 [2024-04-26 13:09:43.293129] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.293136] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.293139] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293143] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4a60) on tqpair=0x1c7cd10 00:26:38.462 [2024-04-26 13:09:43.293151] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293155] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293158] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.293164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:38.462 [2024-04-26 13:09:43.293170] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293177] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.293183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.462 [2024-04-26 13:09:43.293189] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293193] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293196] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.293202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.462 [2024-04-26 13:09:43.293208] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293215] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.293220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.462 [2024-04-26 13:09:43.293227] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:38.462 [2024-04-26 13:09:43.293238] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:38.462 [2024-04-26 13:09:43.293244] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293248] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.293255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.462 [2024-04-26 13:09:43.293266] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4a60, cid 0, qid 0 00:26:38.462 [2024-04-26 13:09:43.293271] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4bc0, cid 1, qid 0 00:26:38.462 [2024-04-26 13:09:43.293276] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4d20, cid 2, qid 0 00:26:38.462 [2024-04-26 13:09:43.293280] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.462 [2024-04-26 13:09:43.293285] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4fe0, cid 4, qid 0 00:26:38.462 [2024-04-26 13:09:43.293503] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.293510] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.293513] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293517] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4fe0) on tqpair=0x1c7cd10 
00:26:38.462 [2024-04-26 13:09:43.293522] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:38.462 [2024-04-26 13:09:43.293527] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:38.462 [2024-04-26 13:09:43.293537] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293541] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.293548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.462 [2024-04-26 13:09:43.293557] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4fe0, cid 4, qid 0 00:26:38.462 [2024-04-26 13:09:43.293822] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.462 [2024-04-26 13:09:43.293829] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.462 [2024-04-26 13:09:43.293832] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293835] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7cd10): datao=0, datal=4096, cccid=4 00:26:38.462 [2024-04-26 13:09:43.293844] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4fe0) on tqpair(0x1c7cd10): expected_datao=0, payload_size=4096 00:26:38.462 [2024-04-26 13:09:43.293849] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293855] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.293859] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294000] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.294006] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.294010] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294013] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4fe0) on tqpair=0x1c7cd10 00:26:38.462 [2024-04-26 13:09:43.294025] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:38.462 [2024-04-26 13:09:43.294046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294050] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.294056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.462 [2024-04-26 13:09:43.294063] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294067] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294070] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.294077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.462 [2024-04-26 13:09:43.294093] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4fe0, cid 4, qid 0 00:26:38.462 [2024-04-26 13:09:43.294098] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce5140, cid 5, qid 0 00:26:38.462 [2024-04-26 13:09:43.294336] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.462 [2024-04-26 13:09:43.294343] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.462 [2024-04-26 13:09:43.294346] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294350] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7cd10): datao=0, datal=1024, cccid=4 00:26:38.462 [2024-04-26 13:09:43.294354] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4fe0) on tqpair(0x1c7cd10): expected_datao=0, payload_size=1024 00:26:38.462 [2024-04-26 13:09:43.294358] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294365] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294368] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294374] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.294380] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.294383] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.294387] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce5140) on tqpair=0x1c7cd10 00:26:38.462 [2024-04-26 13:09:43.335057] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.335067] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.335071] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335077] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4fe0) on tqpair=0x1c7cd10 00:26:38.462 [2024-04-26 13:09:43.335089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335093] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.335099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.462 [2024-04-26 13:09:43.335113] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4fe0, cid 4, qid 0 00:26:38.462 [2024-04-26 13:09:43.335331] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.462 [2024-04-26 13:09:43.335339] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.462 [2024-04-26 13:09:43.335343] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335346] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7cd10): datao=0, datal=3072, cccid=4 00:26:38.462 [2024-04-26 13:09:43.335351] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4fe0) on tqpair(0x1c7cd10): expected_datao=0, payload_size=3072 00:26:38.462 [2024-04-26 13:09:43.335355] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335361] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
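Up to this point the trace shows the discovery controller being brought up over TCP: identify completes with CNTLID 0x0001 and an MDTS-limited max_xfer_size of 131072, the keep-alive timer is programmed, and GET LOG PAGE pulls the discovery log in 4096/1024/3072/8-byte pieces before the full dump below. A rough public-API equivalent, offered as a sketch only: the transport string, target address, and printed fields mirror the trace, but error handling and spdk_env_opts details are trimmed and may differ between SPDK versions.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same discovery service as in the trace: TCP, 10.0.0.2:4420. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x, MDTS %u\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}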
00:26:38.462 [2024-04-26 13:09:43.335367] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335529] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.335536] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.335540] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335543] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4fe0) on tqpair=0x1c7cd10 00:26:38.462 [2024-04-26 13:09:43.335552] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335556] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7cd10) 00:26:38.462 [2024-04-26 13:09:43.335562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.462 [2024-04-26 13:09:43.335575] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4fe0, cid 4, qid 0 00:26:38.462 [2024-04-26 13:09:43.335802] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.462 [2024-04-26 13:09:43.335808] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.462 [2024-04-26 13:09:43.335811] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335815] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7cd10): datao=0, datal=8, cccid=4 00:26:38.462 [2024-04-26 13:09:43.335819] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4fe0) on tqpair(0x1c7cd10): expected_datao=0, payload_size=8 00:26:38.462 [2024-04-26 13:09:43.335823] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335830] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.335833] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.380845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.462 [2024-04-26 13:09:43.380855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.462 [2024-04-26 13:09:43.380859] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.462 [2024-04-26 13:09:43.380862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4fe0) on tqpair=0x1c7cd10 00:26:38.462 ===================================================== 00:26:38.462 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:38.462 ===================================================== 00:26:38.462 Controller Capabilities/Features 00:26:38.462 ================================ 00:26:38.462 Vendor ID: 0000 00:26:38.462 Subsystem Vendor ID: 0000 00:26:38.462 Serial Number: .................... 00:26:38.462 Model Number: ........................................ 
00:26:38.462 Firmware Version: 24.05 00:26:38.462 Recommended Arb Burst: 0 00:26:38.462 IEEE OUI Identifier: 00 00 00 00:26:38.462 Multi-path I/O 00:26:38.462 May have multiple subsystem ports: No 00:26:38.462 May have multiple controllers: No 00:26:38.462 Associated with SR-IOV VF: No 00:26:38.462 Max Data Transfer Size: 131072 00:26:38.462 Max Number of Namespaces: 0 00:26:38.462 Max Number of I/O Queues: 1024 00:26:38.462 NVMe Specification Version (VS): 1.3 00:26:38.462 NVMe Specification Version (Identify): 1.3 00:26:38.462 Maximum Queue Entries: 128 00:26:38.462 Contiguous Queues Required: Yes 00:26:38.462 Arbitration Mechanisms Supported 00:26:38.462 Weighted Round Robin: Not Supported 00:26:38.462 Vendor Specific: Not Supported 00:26:38.462 Reset Timeout: 15000 ms 00:26:38.462 Doorbell Stride: 4 bytes 00:26:38.462 NVM Subsystem Reset: Not Supported 00:26:38.462 Command Sets Supported 00:26:38.462 NVM Command Set: Supported 00:26:38.462 Boot Partition: Not Supported 00:26:38.462 Memory Page Size Minimum: 4096 bytes 00:26:38.462 Memory Page Size Maximum: 4096 bytes 00:26:38.462 Persistent Memory Region: Not Supported 00:26:38.462 Optional Asynchronous Events Supported 00:26:38.462 Namespace Attribute Notices: Not Supported 00:26:38.462 Firmware Activation Notices: Not Supported 00:26:38.463 ANA Change Notices: Not Supported 00:26:38.463 PLE Aggregate Log Change Notices: Not Supported 00:26:38.463 LBA Status Info Alert Notices: Not Supported 00:26:38.463 EGE Aggregate Log Change Notices: Not Supported 00:26:38.463 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.463 Zone Descriptor Change Notices: Not Supported 00:26:38.463 Discovery Log Change Notices: Supported 00:26:38.463 Controller Attributes 00:26:38.463 128-bit Host Identifier: Not Supported 00:26:38.463 Non-Operational Permissive Mode: Not Supported 00:26:38.463 NVM Sets: Not Supported 00:26:38.463 Read Recovery Levels: Not Supported 00:26:38.463 Endurance Groups: Not Supported 00:26:38.463 Predictable Latency Mode: Not Supported 00:26:38.463 Traffic Based Keep ALive: Not Supported 00:26:38.463 Namespace Granularity: Not Supported 00:26:38.463 SQ Associations: Not Supported 00:26:38.463 UUID List: Not Supported 00:26:38.463 Multi-Domain Subsystem: Not Supported 00:26:38.463 Fixed Capacity Management: Not Supported 00:26:38.463 Variable Capacity Management: Not Supported 00:26:38.463 Delete Endurance Group: Not Supported 00:26:38.463 Delete NVM Set: Not Supported 00:26:38.463 Extended LBA Formats Supported: Not Supported 00:26:38.463 Flexible Data Placement Supported: Not Supported 00:26:38.463 00:26:38.463 Controller Memory Buffer Support 00:26:38.463 ================================ 00:26:38.463 Supported: No 00:26:38.463 00:26:38.463 Persistent Memory Region Support 00:26:38.463 ================================ 00:26:38.463 Supported: No 00:26:38.463 00:26:38.463 Admin Command Set Attributes 00:26:38.463 ============================ 00:26:38.463 Security Send/Receive: Not Supported 00:26:38.463 Format NVM: Not Supported 00:26:38.463 Firmware Activate/Download: Not Supported 00:26:38.463 Namespace Management: Not Supported 00:26:38.463 Device Self-Test: Not Supported 00:26:38.463 Directives: Not Supported 00:26:38.463 NVMe-MI: Not Supported 00:26:38.463 Virtualization Management: Not Supported 00:26:38.463 Doorbell Buffer Config: Not Supported 00:26:38.463 Get LBA Status Capability: Not Supported 00:26:38.463 Command & Feature Lockdown Capability: Not Supported 00:26:38.463 Abort Command Limit: 1 00:26:38.463 Async 
Event Request Limit: 4 00:26:38.463 Number of Firmware Slots: N/A 00:26:38.463 Firmware Slot 1 Read-Only: N/A 00:26:38.463 Firmware Activation Without Reset: N/A 00:26:38.463 Multiple Update Detection Support: N/A 00:26:38.463 Firmware Update Granularity: No Information Provided 00:26:38.463 Per-Namespace SMART Log: No 00:26:38.463 Asymmetric Namespace Access Log Page: Not Supported 00:26:38.463 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:38.463 Command Effects Log Page: Not Supported 00:26:38.463 Get Log Page Extended Data: Supported 00:26:38.463 Telemetry Log Pages: Not Supported 00:26:38.463 Persistent Event Log Pages: Not Supported 00:26:38.463 Supported Log Pages Log Page: May Support 00:26:38.463 Commands Supported & Effects Log Page: Not Supported 00:26:38.463 Feature Identifiers & Effects Log Page:May Support 00:26:38.463 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.463 Data Area 4 for Telemetry Log: Not Supported 00:26:38.463 Error Log Page Entries Supported: 128 00:26:38.463 Keep Alive: Not Supported 00:26:38.463 00:26:38.463 NVM Command Set Attributes 00:26:38.463 ========================== 00:26:38.463 Submission Queue Entry Size 00:26:38.463 Max: 1 00:26:38.463 Min: 1 00:26:38.463 Completion Queue Entry Size 00:26:38.463 Max: 1 00:26:38.463 Min: 1 00:26:38.463 Number of Namespaces: 0 00:26:38.463 Compare Command: Not Supported 00:26:38.463 Write Uncorrectable Command: Not Supported 00:26:38.463 Dataset Management Command: Not Supported 00:26:38.463 Write Zeroes Command: Not Supported 00:26:38.463 Set Features Save Field: Not Supported 00:26:38.463 Reservations: Not Supported 00:26:38.463 Timestamp: Not Supported 00:26:38.463 Copy: Not Supported 00:26:38.463 Volatile Write Cache: Not Present 00:26:38.463 Atomic Write Unit (Normal): 1 00:26:38.463 Atomic Write Unit (PFail): 1 00:26:38.463 Atomic Compare & Write Unit: 1 00:26:38.463 Fused Compare & Write: Supported 00:26:38.463 Scatter-Gather List 00:26:38.463 SGL Command Set: Supported 00:26:38.463 SGL Keyed: Supported 00:26:38.463 SGL Bit Bucket Descriptor: Not Supported 00:26:38.463 SGL Metadata Pointer: Not Supported 00:26:38.463 Oversized SGL: Not Supported 00:26:38.463 SGL Metadata Address: Not Supported 00:26:38.463 SGL Offset: Supported 00:26:38.463 Transport SGL Data Block: Not Supported 00:26:38.463 Replay Protected Memory Block: Not Supported 00:26:38.463 00:26:38.463 Firmware Slot Information 00:26:38.463 ========================= 00:26:38.463 Active slot: 0 00:26:38.463 00:26:38.463 00:26:38.463 Error Log 00:26:38.463 ========= 00:26:38.463 00:26:38.463 Active Namespaces 00:26:38.463 ================= 00:26:38.463 Discovery Log Page 00:26:38.463 ================== 00:26:38.463 Generation Counter: 2 00:26:38.463 Number of Records: 2 00:26:38.463 Record Format: 0 00:26:38.463 00:26:38.463 Discovery Log Entry 0 00:26:38.463 ---------------------- 00:26:38.463 Transport Type: 3 (TCP) 00:26:38.463 Address Family: 1 (IPv4) 00:26:38.463 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:38.463 Entry Flags: 00:26:38.463 Duplicate Returned Information: 1 00:26:38.463 Explicit Persistent Connection Support for Discovery: 1 00:26:38.463 Transport Requirements: 00:26:38.463 Secure Channel: Not Required 00:26:38.463 Port ID: 0 (0x0000) 00:26:38.463 Controller ID: 65535 (0xffff) 00:26:38.463 Admin Max SQ Size: 128 00:26:38.463 Transport Service Identifier: 4420 00:26:38.463 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:38.463 Transport Address: 10.0.0.2 00:26:38.463 
Discovery Log Entry 1 00:26:38.463 ---------------------- 00:26:38.463 Transport Type: 3 (TCP) 00:26:38.463 Address Family: 1 (IPv4) 00:26:38.463 Subsystem Type: 2 (NVM Subsystem) 00:26:38.463 Entry Flags: 00:26:38.463 Duplicate Returned Information: 0 00:26:38.463 Explicit Persistent Connection Support for Discovery: 0 00:26:38.463 Transport Requirements: 00:26:38.463 Secure Channel: Not Required 00:26:38.463 Port ID: 0 (0x0000) 00:26:38.463 Controller ID: 65535 (0xffff) 00:26:38.463 Admin Max SQ Size: 128 00:26:38.463 Transport Service Identifier: 4420 00:26:38.463 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:38.463 Transport Address: 10.0.0.2 [2024-04-26 13:09:43.380948] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:38.463 [2024-04-26 13:09:43.380962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.463 [2024-04-26 13:09:43.380970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.463 [2024-04-26 13:09:43.380976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.463 [2024-04-26 13:09:43.380982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.463 [2024-04-26 13:09:43.380990] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.380994] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.380997] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.463 [2024-04-26 13:09:43.381004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.463 [2024-04-26 13:09:43.381017] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.463 [2024-04-26 13:09:43.381264] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.463 [2024-04-26 13:09:43.381271] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.463 [2024-04-26 13:09:43.381274] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381278] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.463 [2024-04-26 13:09:43.381290] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381294] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381297] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.463 [2024-04-26 13:09:43.381304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.463 [2024-04-26 13:09:43.381317] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.463 [2024-04-26 13:09:43.381507] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.463 [2024-04-26 13:09:43.381513] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.463 [2024-04-26 13:09:43.381516] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381520] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.463 [2024-04-26 13:09:43.381525] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:38.463 [2024-04-26 13:09:43.381530] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:38.463 [2024-04-26 13:09:43.381539] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381542] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381546] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.463 [2024-04-26 13:09:43.381552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.463 [2024-04-26 13:09:43.381562] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.463 [2024-04-26 13:09:43.381761] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.463 [2024-04-26 13:09:43.381767] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.463 [2024-04-26 13:09:43.381770] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381774] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.463 [2024-04-26 13:09:43.381785] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381789] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.381792] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.463 [2024-04-26 13:09:43.381799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.463 [2024-04-26 13:09:43.381808] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.463 [2024-04-26 13:09:43.382020] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.463 [2024-04-26 13:09:43.382027] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.463 [2024-04-26 13:09:43.382031] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382034] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.463 [2024-04-26 13:09:43.382044] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382048] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382052] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.463 [2024-04-26 13:09:43.382058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.463 [2024-04-26 13:09:43.382068] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.463 [2024-04-26 13:09:43.382270] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.463 [2024-04-26 
13:09:43.382276] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.463 [2024-04-26 13:09:43.382282] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382285] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.463 [2024-04-26 13:09:43.382296] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382299] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382303] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.463 [2024-04-26 13:09:43.382309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.463 [2024-04-26 13:09:43.382319] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.463 [2024-04-26 13:09:43.382527] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.463 [2024-04-26 13:09:43.382533] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.463 [2024-04-26 13:09:43.382536] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382540] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.463 [2024-04-26 13:09:43.382550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382554] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.463 [2024-04-26 13:09:43.382558] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.463 [2024-04-26 13:09:43.382564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.463 [2024-04-26 13:09:43.382574] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.463 [2024-04-26 13:09:43.382757] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.382763] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.382766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.382770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.382780] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.382784] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.382788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.382794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.382803] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.382979] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.382986] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.382989] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
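The repeated FABRIC PROPERTY GET completions in this stretch are the discovery controller being torn down: the destruct path reports RTD3E = 0, falls back to a 10000 ms shutdown timeout, and polls CSTS until shutdown completes (7 ms later, per the record further below). From the application side the same sequence is driven by detach; a sketch of the non-blocking variant, assuming spdk_nvme_detach_async and spdk_nvme_detach_poll_async behave as in recent SPDK releases:

#include <errno.h>
#include "spdk/nvme.h"

/* Non-blocking detach: starts the shutdown and then polls until the controller
 * reports shutdown complete, which is the CSTS polling visible in the trace. */
static void
detach_and_wait(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0) {
		return;
	}
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		/* keep polling; the trace shows this finishing within ~7 ms */
	}
}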
00:26:38.464 [2024-04-26 13:09:43.382993] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.383003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383011] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.383017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.383027] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.383280] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.383286] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.383289] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383295] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.383305] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383309] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383312] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.383319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.383328] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.383533] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.383539] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.383542] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.383556] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383560] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383564] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.383570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.383580] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.383763] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.383769] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.383772] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383776] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.383786] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383790] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.383794] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.383800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.383809] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.384018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.384024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.384028] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384032] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.384042] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384049] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.384056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.384065] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.384321] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.384327] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.384330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384334] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.384346] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384350] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384353] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.384360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.384369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.384570] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.384576] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.384579] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384583] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.384593] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384597] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 
13:09:43.384600] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.384607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.384616] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.384818] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.384824] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.384827] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.384831] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.388847] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.388853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.388856] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7cd10) 00:26:38.464 [2024-04-26 13:09:43.388863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.388874] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e80, cid 3, qid 0 00:26:38.464 [2024-04-26 13:09:43.389059] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.389066] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.389069] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.389073] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ce4e80) on tqpair=0x1c7cd10 00:26:38.464 [2024-04-26 13:09:43.389081] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:38.464 00:26:38.464 13:09:43 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:38.464 [2024-04-26 13:09:43.431300] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
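The identify.sh step above launches spdk_nvme_identify against nqn.2016-06.io.spdk:cnode1, and the records that follow are the controller-attach state machine it triggers: connect adminq, icreq, FABRIC CONNECT, register reads, enable, identify, AER configuration, keep alive, queue count, and namespace identification. A sketch of the same attach using spdk_nvme_probe with probe/attach callbacks; the transport string is taken from the command line above, and the 10000 ms keep-alive value is simply the default that produces the 5000000 us cadence seen in the trace:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	/* 10000 ms is already the default; the driver sends keep alives at half
	 * the timeout, matching the 5000000 us interval logged above. */
	opts->keep_alive_timeout_ms = 10000;
	return true;	/* attach to the controller this probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("attached to %s\n", trid->subnqn);
}

static int
attach_cnode1(void)
{
	struct spdk_nvme_transport_id trid = {0};

	spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1");
	/* Runs the connect adminq -> identify -> ready sequence recorded below. */
	return spdk_nvme_probe(&trid, NULL, probe_cb, attach_cb, NULL);
}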
00:26:38.464 [2024-04-26 13:09:43.431377] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113136 ] 00:26:38.464 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.464 [2024-04-26 13:09:43.468379] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:38.464 [2024-04-26 13:09:43.468419] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:38.464 [2024-04-26 13:09:43.468425] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:38.464 [2024-04-26 13:09:43.468437] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:38.464 [2024-04-26 13:09:43.468444] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:38.464 [2024-04-26 13:09:43.471867] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:38.464 [2024-04-26 13:09:43.471891] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1240d10 0 00:26:38.464 [2024-04-26 13:09:43.479844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:38.464 [2024-04-26 13:09:43.479853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:38.464 [2024-04-26 13:09:43.479858] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:38.464 [2024-04-26 13:09:43.479861] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:38.464 [2024-04-26 13:09:43.479890] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.479896] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.479899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.464 [2024-04-26 13:09:43.479911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:38.464 [2024-04-26 13:09:43.479925] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.464 [2024-04-26 13:09:43.487848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.487857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.487861] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.487865] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.464 [2024-04-26 13:09:43.487877] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:38.464 [2024-04-26 13:09:43.487883] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:38.464 [2024-04-26 13:09:43.487888] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:38.464 [2024-04-26 13:09:43.487899] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.487903] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 
13:09:43.487906] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.464 [2024-04-26 13:09:43.487913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.487926] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.464 [2024-04-26 13:09:43.488110] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.488116] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.488120] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.488123] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.464 [2024-04-26 13:09:43.488131] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:38.464 [2024-04-26 13:09:43.488137] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:38.464 [2024-04-26 13:09:43.488144] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.488150] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.488154] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.464 [2024-04-26 13:09:43.488161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.464 [2024-04-26 13:09:43.488171] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.464 [2024-04-26 13:09:43.488369] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.464 [2024-04-26 13:09:43.488376] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.464 [2024-04-26 13:09:43.488379] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.464 [2024-04-26 13:09:43.488383] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.464 [2024-04-26 13:09:43.488389] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:38.464 [2024-04-26 13:09:43.488396] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:38.464 [2024-04-26 13:09:43.488403] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.488406] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.488410] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.488416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.488426] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.465 [2024-04-26 13:09:43.488634] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.488640] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
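The "read vs" and "read cap" states above fetch the Version and Capabilities registers over the fabric with FABRIC PROPERTY GET before the controller is enabled. Once attach has finished, the same registers can be read back through the cached accessors; a small sketch (MQES is zero-based, so MQES + 1 corresponds to the "Maximum Queue Entries: 128" line in the identify dump earlier):

#include <stdio.h>
#include "spdk/nvme.h"

/* Read back the registers that the attach state machine above fetched. */
static void
print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("NVMe %u.%u, max queue entries %u, CSTS.RDY %u\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
	       (unsigned)cap.bits.mqes + 1, (unsigned)csts.bits.rdy);
}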
00:26:38.465 [2024-04-26 13:09:43.488644] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.488647] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.488653] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:38.465 [2024-04-26 13:09:43.488662] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.488665] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.488669] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.488675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.488685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.465 [2024-04-26 13:09:43.488863] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.488870] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.488873] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.488877] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.488882] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:38.465 [2024-04-26 13:09:43.488886] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:38.465 [2024-04-26 13:09:43.488894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:38.465 [2024-04-26 13:09:43.488999] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:38.465 [2024-04-26 13:09:43.489004] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:38.465 [2024-04-26 13:09:43.489012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489015] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489019] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.489026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.489036] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.465 [2024-04-26 13:09:43.489199] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.489205] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.489208] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489212] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on 
tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.489217] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:38.465 [2024-04-26 13:09:43.489226] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489233] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.489240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.489249] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.465 [2024-04-26 13:09:43.489433] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.489439] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.489443] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489446] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.489451] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:38.465 [2024-04-26 13:09:43.489456] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.489463] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:38.465 [2024-04-26 13:09:43.489470] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.489478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489482] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.489489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.489499] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.465 [2024-04-26 13:09:43.489726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.465 [2024-04-26 13:09:43.489732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.465 [2024-04-26 13:09:43.489736] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489740] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=4096, cccid=0 00:26:38.465 [2024-04-26 13:09:43.489744] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8a60) on tqpair(0x1240d10): expected_datao=0, payload_size=4096 00:26:38.465 [2024-04-26 13:09:43.489750] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489771] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489775] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489921] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.489928] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.489931] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489935] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.489943] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:38.465 [2024-04-26 13:09:43.489948] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:38.465 [2024-04-26 13:09:43.489952] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:38.465 [2024-04-26 13:09:43.489958] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:38.465 [2024-04-26 13:09:43.489963] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:38.465 [2024-04-26 13:09:43.489967] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.489975] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.489982] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489986] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.489989] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.489996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:38.465 [2024-04-26 13:09:43.490007] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.465 [2024-04-26 13:09:43.490200] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.490207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.490210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490214] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8a60) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.490221] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490224] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490228] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.490234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.465 [2024-04-26 13:09:43.490240] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490244] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490247] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.490253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.465 [2024-04-26 13:09:43.490259] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490262] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490266] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.490271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.465 [2024-04-26 13:09:43.490279] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490282] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490286] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.490291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.465 [2024-04-26 13:09:43.490296] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.490306] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.490312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.490322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.490334] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8a60, cid 0, qid 0 00:26:38.465 [2024-04-26 13:09:43.490339] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8bc0, cid 1, qid 0 00:26:38.465 [2024-04-26 13:09:43.490343] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8d20, cid 2, qid 0 00:26:38.465 [2024-04-26 13:09:43.490348] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8e80, cid 3, qid 0 00:26:38.465 [2024-04-26 13:09:43.490352] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8fe0, cid 4, qid 0 00:26:38.465 [2024-04-26 13:09:43.490580] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.490586] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.490589] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490593] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8fe0) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.490598] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:38.465 [2024-04-26 13:09:43.490603] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.490611] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.490616] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.490622] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490629] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.490636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:38.465 [2024-04-26 13:09:43.490645] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8fe0, cid 4, qid 0 00:26:38.465 [2024-04-26 13:09:43.490849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.490855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.490859] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8fe0) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.490912] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.490923] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.490930] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.490933] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.490940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.490950] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8fe0, cid 4, qid 0 00:26:38.465 [2024-04-26 13:09:43.491201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.465 [2024-04-26 13:09:43.491207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.465 [2024-04-26 13:09:43.491210] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491214] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=4096, cccid=4 00:26:38.465 [2024-04-26 13:09:43.491218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8fe0) on tqpair(0x1240d10): expected_datao=0, payload_size=4096 00:26:38.465 [2024-04-26 13:09:43.491223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491229] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491233] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491332] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.465 [2024-04-26 13:09:43.491338] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.465 [2024-04-26 13:09:43.491341] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491345] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8fe0) on tqpair=0x1240d10 00:26:38.465 [2024-04-26 13:09:43.491354] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:38.465 [2024-04-26 13:09:43.491366] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.491375] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:38.465 [2024-04-26 13:09:43.491381] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240d10) 00:26:38.465 [2024-04-26 13:09:43.491391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.465 [2024-04-26 13:09:43.491401] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8fe0, cid 4, qid 0 00:26:38.465 [2024-04-26 13:09:43.491629] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.465 [2024-04-26 13:09:43.491635] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.465 [2024-04-26 13:09:43.491638] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491642] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=4096, cccid=4 00:26:38.465 [2024-04-26 13:09:43.491646] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8fe0) on tqpair(0x1240d10): expected_datao=0, payload_size=4096 00:26:38.465 [2024-04-26 13:09:43.491650] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.465 [2024-04-26 13:09:43.491683] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.491686] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.491817] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.491823] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.491830] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.491834] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8fe0) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.495853] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.495864] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.495871] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.495874] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.495881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.495892] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8fe0, cid 4, qid 0 00:26:38.466 [2024-04-26 13:09:43.496063] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.466 [2024-04-26 13:09:43.496069] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.466 [2024-04-26 13:09:43.496072] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496076] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=4096, cccid=4 00:26:38.466 [2024-04-26 13:09:43.496080] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8fe0) on tqpair(0x1240d10): expected_datao=0, payload_size=4096 00:26:38.466 [2024-04-26 13:09:43.496084] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496103] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496107] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496273] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.496279] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.496282] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496286] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8fe0) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.496293] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.496301] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.496309] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.496314] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.496320] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.496324] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:38.466 [2024-04-26 13:09:43.496329] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:38.466 [2024-04-26 13:09:43.496334] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:38.466 [2024-04-26 13:09:43.496347] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496351] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.496357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.496365] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496369] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.496378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.466 [2024-04-26 13:09:43.496391] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8fe0, cid 4, qid 0 00:26:38.466 [2024-04-26 13:09:43.496396] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9140, cid 5, qid 0 00:26:38.466 [2024-04-26 13:09:43.496593] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.496600] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.496603] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496607] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8fe0) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.496614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.496620] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.496623] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496627] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9140) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.496636] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.496646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.496655] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9140, cid 5, qid 0 00:26:38.466 [2024-04-26 13:09:43.496847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.496854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.496857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9140) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.496870] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.496874] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.496880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.496890] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9140, cid 5, qid 0 00:26:38.466 [2024-04-26 13:09:43.497110] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.497116] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.497119] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497123] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9140) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.497132] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497136] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.497142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.497151] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9140, cid 5, qid 0 00:26:38.466 [2024-04-26 13:09:43.497368] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.497374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.497379] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497383] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9140) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.497394] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497398] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.497404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.497411] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497415] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.497421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.497428] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.497438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.497445] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497448] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1240d10) 00:26:38.466 [2024-04-26 13:09:43.497455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.466 [2024-04-26 13:09:43.497465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9140, cid 5, qid 0 00:26:38.466 [2024-04-26 13:09:43.497470] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8fe0, cid 4, qid 0 00:26:38.466 [2024-04-26 13:09:43.497475] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x12a92a0, cid 6, qid 0 00:26:38.466 [2024-04-26 13:09:43.497479] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9400, cid 7, qid 0 00:26:38.466 [2024-04-26 13:09:43.497680] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.466 [2024-04-26 13:09:43.497687] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.466 [2024-04-26 13:09:43.497690] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497694] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=8192, cccid=5 00:26:38.466 [2024-04-26 13:09:43.497698] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a9140) on tqpair(0x1240d10): expected_datao=0, payload_size=8192 00:26:38.466 [2024-04-26 13:09:43.497702] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497789] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497793] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497798] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.466 [2024-04-26 13:09:43.497804] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.466 [2024-04-26 13:09:43.497807] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497811] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=512, cccid=4 00:26:38.466 [2024-04-26 13:09:43.497815] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a8fe0) on tqpair(0x1240d10): expected_datao=0, payload_size=512 00:26:38.466 [2024-04-26 13:09:43.497819] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497826] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497829] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497840] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.466 [2024-04-26 13:09:43.497846] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.466 [2024-04-26 13:09:43.497850] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497853] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=512, cccid=6 00:26:38.466 [2024-04-26 13:09:43.497858] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a92a0) on tqpair(0x1240d10): expected_datao=0, payload_size=512 00:26:38.466 [2024-04-26 13:09:43.497862] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497868] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497871] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497877] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:38.466 [2024-04-26 13:09:43.497883] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:38.466 [2024-04-26 13:09:43.497886] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497889] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1240d10): datao=0, datal=4096, cccid=7 
00:26:38.466 [2024-04-26 13:09:43.497894] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a9400) on tqpair(0x1240d10): expected_datao=0, payload_size=4096 00:26:38.466 [2024-04-26 13:09:43.497898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497917] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.497920] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.498145] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.498151] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.498155] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.498158] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9140) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.498171] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.498177] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.498181] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.498184] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8fe0) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.498194] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.498199] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.498203] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.498206] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a92a0) on tqpair=0x1240d10 00:26:38.466 [2024-04-26 13:09:43.498214] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.466 [2024-04-26 13:09:43.498220] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.466 [2024-04-26 13:09:43.498223] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.466 [2024-04-26 13:09:43.498227] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9400) on tqpair=0x1240d10 00:26:38.466 ===================================================== 00:26:38.466 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:38.466 ===================================================== 00:26:38.466 Controller Capabilities/Features 00:26:38.466 ================================ 00:26:38.466 Vendor ID: 8086 00:26:38.466 Subsystem Vendor ID: 8086 00:26:38.466 Serial Number: SPDK00000000000001 00:26:38.466 Model Number: SPDK bdev Controller 00:26:38.466 Firmware Version: 24.05 00:26:38.466 Recommended Arb Burst: 6 00:26:38.466 IEEE OUI Identifier: e4 d2 5c 00:26:38.466 Multi-path I/O 00:26:38.466 May have multiple subsystem ports: Yes 00:26:38.466 May have multiple controllers: Yes 00:26:38.466 Associated with SR-IOV VF: No 00:26:38.466 Max Data Transfer Size: 131072 00:26:38.466 Max Number of Namespaces: 32 00:26:38.466 Max Number of I/O Queues: 127 00:26:38.466 NVMe Specification Version (VS): 1.3 00:26:38.466 NVMe Specification Version (Identify): 1.3 00:26:38.466 Maximum Queue Entries: 128 00:26:38.466 Contiguous Queues Required: Yes 00:26:38.466 Arbitration Mechanisms Supported 00:26:38.466 Weighted Round Robin: Not Supported 00:26:38.466 Vendor 
Specific: Not Supported 00:26:38.466 Reset Timeout: 15000 ms 00:26:38.466 Doorbell Stride: 4 bytes 00:26:38.466 NVM Subsystem Reset: Not Supported 00:26:38.466 Command Sets Supported 00:26:38.466 NVM Command Set: Supported 00:26:38.466 Boot Partition: Not Supported 00:26:38.466 Memory Page Size Minimum: 4096 bytes 00:26:38.466 Memory Page Size Maximum: 4096 bytes 00:26:38.466 Persistent Memory Region: Not Supported 00:26:38.466 Optional Asynchronous Events Supported 00:26:38.466 Namespace Attribute Notices: Supported 00:26:38.466 Firmware Activation Notices: Not Supported 00:26:38.466 ANA Change Notices: Not Supported 00:26:38.466 PLE Aggregate Log Change Notices: Not Supported 00:26:38.466 LBA Status Info Alert Notices: Not Supported 00:26:38.466 EGE Aggregate Log Change Notices: Not Supported 00:26:38.466 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.466 Zone Descriptor Change Notices: Not Supported 00:26:38.467 Discovery Log Change Notices: Not Supported 00:26:38.467 Controller Attributes 00:26:38.467 128-bit Host Identifier: Supported 00:26:38.467 Non-Operational Permissive Mode: Not Supported 00:26:38.467 NVM Sets: Not Supported 00:26:38.467 Read Recovery Levels: Not Supported 00:26:38.467 Endurance Groups: Not Supported 00:26:38.467 Predictable Latency Mode: Not Supported 00:26:38.467 Traffic Based Keep ALive: Not Supported 00:26:38.467 Namespace Granularity: Not Supported 00:26:38.467 SQ Associations: Not Supported 00:26:38.467 UUID List: Not Supported 00:26:38.467 Multi-Domain Subsystem: Not Supported 00:26:38.467 Fixed Capacity Management: Not Supported 00:26:38.467 Variable Capacity Management: Not Supported 00:26:38.467 Delete Endurance Group: Not Supported 00:26:38.467 Delete NVM Set: Not Supported 00:26:38.467 Extended LBA Formats Supported: Not Supported 00:26:38.467 Flexible Data Placement Supported: Not Supported 00:26:38.467 00:26:38.467 Controller Memory Buffer Support 00:26:38.467 ================================ 00:26:38.467 Supported: No 00:26:38.467 00:26:38.467 Persistent Memory Region Support 00:26:38.467 ================================ 00:26:38.467 Supported: No 00:26:38.467 00:26:38.467 Admin Command Set Attributes 00:26:38.467 ============================ 00:26:38.467 Security Send/Receive: Not Supported 00:26:38.467 Format NVM: Not Supported 00:26:38.467 Firmware Activate/Download: Not Supported 00:26:38.467 Namespace Management: Not Supported 00:26:38.467 Device Self-Test: Not Supported 00:26:38.467 Directives: Not Supported 00:26:38.467 NVMe-MI: Not Supported 00:26:38.467 Virtualization Management: Not Supported 00:26:38.467 Doorbell Buffer Config: Not Supported 00:26:38.467 Get LBA Status Capability: Not Supported 00:26:38.467 Command & Feature Lockdown Capability: Not Supported 00:26:38.467 Abort Command Limit: 4 00:26:38.467 Async Event Request Limit: 4 00:26:38.467 Number of Firmware Slots: N/A 00:26:38.467 Firmware Slot 1 Read-Only: N/A 00:26:38.467 Firmware Activation Without Reset: N/A 00:26:38.467 Multiple Update Detection Support: N/A 00:26:38.467 Firmware Update Granularity: No Information Provided 00:26:38.467 Per-Namespace SMART Log: No 00:26:38.467 Asymmetric Namespace Access Log Page: Not Supported 00:26:38.467 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:38.467 Command Effects Log Page: Supported 00:26:38.467 Get Log Page Extended Data: Supported 00:26:38.467 Telemetry Log Pages: Not Supported 00:26:38.467 Persistent Event Log Pages: Not Supported 00:26:38.467 Supported Log Pages Log Page: May Support 00:26:38.467 Commands 
Supported & Effects Log Page: Not Supported 00:26:38.467 Feature Identifiers & Effects Log Page:May Support 00:26:38.467 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.467 Data Area 4 for Telemetry Log: Not Supported 00:26:38.467 Error Log Page Entries Supported: 128 00:26:38.467 Keep Alive: Supported 00:26:38.467 Keep Alive Granularity: 10000 ms 00:26:38.467 00:26:38.467 NVM Command Set Attributes 00:26:38.467 ========================== 00:26:38.467 Submission Queue Entry Size 00:26:38.467 Max: 64 00:26:38.467 Min: 64 00:26:38.467 Completion Queue Entry Size 00:26:38.467 Max: 16 00:26:38.467 Min: 16 00:26:38.467 Number of Namespaces: 32 00:26:38.467 Compare Command: Supported 00:26:38.467 Write Uncorrectable Command: Not Supported 00:26:38.467 Dataset Management Command: Supported 00:26:38.467 Write Zeroes Command: Supported 00:26:38.467 Set Features Save Field: Not Supported 00:26:38.467 Reservations: Supported 00:26:38.467 Timestamp: Not Supported 00:26:38.467 Copy: Supported 00:26:38.467 Volatile Write Cache: Present 00:26:38.467 Atomic Write Unit (Normal): 1 00:26:38.467 Atomic Write Unit (PFail): 1 00:26:38.467 Atomic Compare & Write Unit: 1 00:26:38.467 Fused Compare & Write: Supported 00:26:38.467 Scatter-Gather List 00:26:38.467 SGL Command Set: Supported 00:26:38.467 SGL Keyed: Supported 00:26:38.467 SGL Bit Bucket Descriptor: Not Supported 00:26:38.467 SGL Metadata Pointer: Not Supported 00:26:38.467 Oversized SGL: Not Supported 00:26:38.467 SGL Metadata Address: Not Supported 00:26:38.467 SGL Offset: Supported 00:26:38.467 Transport SGL Data Block: Not Supported 00:26:38.467 Replay Protected Memory Block: Not Supported 00:26:38.467 00:26:38.467 Firmware Slot Information 00:26:38.467 ========================= 00:26:38.467 Active slot: 1 00:26:38.467 Slot 1 Firmware Revision: 24.05 00:26:38.467 00:26:38.467 00:26:38.467 Commands Supported and Effects 00:26:38.467 ============================== 00:26:38.467 Admin Commands 00:26:38.467 -------------- 00:26:38.467 Get Log Page (02h): Supported 00:26:38.467 Identify (06h): Supported 00:26:38.467 Abort (08h): Supported 00:26:38.467 Set Features (09h): Supported 00:26:38.467 Get Features (0Ah): Supported 00:26:38.467 Asynchronous Event Request (0Ch): Supported 00:26:38.467 Keep Alive (18h): Supported 00:26:38.467 I/O Commands 00:26:38.467 ------------ 00:26:38.467 Flush (00h): Supported LBA-Change 00:26:38.467 Write (01h): Supported LBA-Change 00:26:38.467 Read (02h): Supported 00:26:38.467 Compare (05h): Supported 00:26:38.467 Write Zeroes (08h): Supported LBA-Change 00:26:38.467 Dataset Management (09h): Supported LBA-Change 00:26:38.467 Copy (19h): Supported LBA-Change 00:26:38.467 Unknown (79h): Supported LBA-Change 00:26:38.467 Unknown (7Ah): Supported 00:26:38.467 00:26:38.467 Error Log 00:26:38.467 ========= 00:26:38.467 00:26:38.467 Arbitration 00:26:38.467 =========== 00:26:38.467 Arbitration Burst: 1 00:26:38.467 00:26:38.467 Power Management 00:26:38.467 ================ 00:26:38.467 Number of Power States: 1 00:26:38.467 Current Power State: Power State #0 00:26:38.467 Power State #0: 00:26:38.467 Max Power: 0.00 W 00:26:38.467 Non-Operational State: Operational 00:26:38.467 Entry Latency: Not Reported 00:26:38.467 Exit Latency: Not Reported 00:26:38.467 Relative Read Throughput: 0 00:26:38.467 Relative Read Latency: 0 00:26:38.467 Relative Write Throughput: 0 00:26:38.467 Relative Write Latency: 0 00:26:38.467 Idle Power: Not Reported 00:26:38.467 Active Power: Not Reported 00:26:38.467 Non-Operational 
Permissive Mode: Not Supported 00:26:38.467 00:26:38.467 Health Information 00:26:38.467 ================== 00:26:38.467 Critical Warnings: 00:26:38.467 Available Spare Space: OK 00:26:38.467 Temperature: OK 00:26:38.467 Device Reliability: OK 00:26:38.467 Read Only: No 00:26:38.467 Volatile Memory Backup: OK 00:26:38.467 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:38.467 Temperature Threshold: [2024-04-26 13:09:43.498329] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.498334] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1240d10) 00:26:38.467 [2024-04-26 13:09:43.498341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.467 [2024-04-26 13:09:43.498352] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a9400, cid 7, qid 0 00:26:38.467 [2024-04-26 13:09:43.498511] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.467 [2024-04-26 13:09:43.498518] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.467 [2024-04-26 13:09:43.498522] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.498526] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a9400) on tqpair=0x1240d10 00:26:38.467 [2024-04-26 13:09:43.498554] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:38.467 [2024-04-26 13:09:43.498565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.467 [2024-04-26 13:09:43.498571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.467 [2024-04-26 13:09:43.498578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.467 [2024-04-26 13:09:43.498583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.467 [2024-04-26 13:09:43.498591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.498595] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.498598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240d10) 00:26:38.467 [2024-04-26 13:09:43.498605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.467 [2024-04-26 13:09:43.498616] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8e80, cid 3, qid 0 00:26:38.467 [2024-04-26 13:09:43.498808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.467 [2024-04-26 13:09:43.498814] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.467 [2024-04-26 13:09:43.498817] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.498821] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8e80) on tqpair=0x1240d10 00:26:38.467 [2024-04-26 13:09:43.498828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.498832] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.498835] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240d10) 00:26:38.467 [2024-04-26 13:09:43.498846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.467 [2024-04-26 13:09:43.498859] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8e80, cid 3, qid 0 00:26:38.467 [2024-04-26 13:09:43.499068] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.467 [2024-04-26 13:09:43.499074] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.467 [2024-04-26 13:09:43.499078] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.499081] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8e80) on tqpair=0x1240d10 00:26:38.467 [2024-04-26 13:09:43.499087] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:38.467 [2024-04-26 13:09:43.499091] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:38.467 [2024-04-26 13:09:43.499100] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.499104] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.499107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240d10) 00:26:38.467 [2024-04-26 13:09:43.499114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.467 [2024-04-26 13:09:43.499123] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8e80, cid 3, qid 0 00:26:38.467 [2024-04-26 13:09:43.499328] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.467 [2024-04-26 13:09:43.499334] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.467 [2024-04-26 13:09:43.499338] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.499344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8e80) on tqpair=0x1240d10 00:26:38.467 [2024-04-26 13:09:43.499355] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.499358] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.467 [2024-04-26 13:09:43.499362] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240d10) 00:26:38.467 [2024-04-26 13:09:43.499368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.468 [2024-04-26 13:09:43.499378] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8e80, cid 3, qid 0 00:26:38.468 [2024-04-26 13:09:43.499550] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.468 [2024-04-26 13:09:43.499556] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.468 [2024-04-26 13:09:43.499559] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.468 [2024-04-26 13:09:43.499563] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8e80) on tqpair=0x1240d10 00:26:38.468 [2024-04-26 13:09:43.499573] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.468 [2024-04-26 13:09:43.499577] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.468 [2024-04-26 13:09:43.499580] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240d10) 00:26:38.468 [2024-04-26 13:09:43.499587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.468 [2024-04-26 13:09:43.499596] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8e80, cid 3, qid 0 00:26:38.468 [2024-04-26 13:09:43.502845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.468 [2024-04-26 13:09:43.502853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.468 [2024-04-26 13:09:43.502856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.468 [2024-04-26 13:09:43.502860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8e80) on tqpair=0x1240d10 00:26:38.468 [2024-04-26 13:09:43.502871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:38.468 [2024-04-26 13:09:43.502875] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:38.468 [2024-04-26 13:09:43.502878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1240d10) 00:26:38.468 [2024-04-26 13:09:43.502885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.468 [2024-04-26 13:09:43.502896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a8e80, cid 3, qid 0 00:26:38.468 [2024-04-26 13:09:43.503069] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:38.468 [2024-04-26 13:09:43.503076] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:38.468 [2024-04-26 13:09:43.503079] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:38.468 [2024-04-26 13:09:43.503083] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12a8e80) on tqpair=0x1240d10 00:26:38.468 [2024-04-26 13:09:43.503090] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 3 milliseconds 00:26:38.468 0 Kelvin (-273 Celsius) 00:26:38.468 Available Spare: 0% 00:26:38.468 Available Spare Threshold: 0% 00:26:38.468 Life Percentage Used: 0% 00:26:38.468 Data Units Read: 0 00:26:38.468 Data Units Written: 0 00:26:38.468 Host Read Commands: 0 00:26:38.468 Host Write Commands: 0 00:26:38.468 Controller Busy Time: 0 minutes 00:26:38.468 Power Cycles: 0 00:26:38.468 Power On Hours: 0 hours 00:26:38.468 Unsafe Shutdowns: 0 00:26:38.468 Unrecoverable Media Errors: 0 00:26:38.468 Lifetime Error Log Entries: 0 00:26:38.468 Warning Temperature Time: 0 minutes 00:26:38.468 Critical Temperature Time: 0 minutes 00:26:38.468 00:26:38.468 Number of Queues 00:26:38.468 ================ 00:26:38.468 Number of I/O Submission Queues: 127 00:26:38.468 Number of I/O Completion Queues: 127 00:26:38.468 00:26:38.468 Active Namespaces 00:26:38.468 ================= 00:26:38.468 Namespace ID:1 00:26:38.468 Error Recovery Timeout: Unlimited 00:26:38.468 Command Set Identifier: NVM (00h) 00:26:38.468 Deallocate: Supported 00:26:38.468 Deallocated/Unwritten Error: Not Supported 00:26:38.468 Deallocated Read Value: Unknown 00:26:38.468 Deallocate in Write Zeroes: Not Supported 
00:26:38.468 Deallocated Guard Field: 0xFFFF 00:26:38.468 Flush: Supported 00:26:38.468 Reservation: Supported 00:26:38.468 Namespace Sharing Capabilities: Multiple Controllers 00:26:38.468 Size (in LBAs): 131072 (0GiB) 00:26:38.468 Capacity (in LBAs): 131072 (0GiB) 00:26:38.468 Utilization (in LBAs): 131072 (0GiB) 00:26:38.468 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:38.468 EUI64: ABCDEF0123456789 00:26:38.468 UUID: e4915dec-44ca-4e40-b2d7-07f9be99ea22 00:26:38.468 Thin Provisioning: Not Supported 00:26:38.468 Per-NS Atomic Units: Yes 00:26:38.468 Atomic Boundary Size (Normal): 0 00:26:38.468 Atomic Boundary Size (PFail): 0 00:26:38.468 Atomic Boundary Offset: 0 00:26:38.468 Maximum Single Source Range Length: 65535 00:26:38.468 Maximum Copy Length: 65535 00:26:38.468 Maximum Source Range Count: 1 00:26:38.468 NGUID/EUI64 Never Reused: No 00:26:38.468 Namespace Write Protected: No 00:26:38.468 Number of LBA Formats: 1 00:26:38.468 Current LBA Format: LBA Format #00 00:26:38.468 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:38.468 00:26:38.468 13:09:43 -- host/identify.sh@51 -- # sync 00:26:38.773 13:09:43 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.773 13:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.773 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:26:38.773 13:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.773 13:09:43 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:38.773 13:09:43 -- host/identify.sh@56 -- # nvmftestfini 00:26:38.773 13:09:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:38.773 13:09:43 -- nvmf/common.sh@117 -- # sync 00:26:38.773 13:09:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:38.773 13:09:43 -- nvmf/common.sh@120 -- # set +e 00:26:38.773 13:09:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.773 13:09:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:38.773 rmmod nvme_tcp 00:26:38.773 rmmod nvme_fabrics 00:26:38.773 rmmod nvme_keyring 00:26:38.773 13:09:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.773 13:09:43 -- nvmf/common.sh@124 -- # set -e 00:26:38.773 13:09:43 -- nvmf/common.sh@125 -- # return 0 00:26:38.773 13:09:43 -- nvmf/common.sh@478 -- # '[' -n 4112935 ']' 00:26:38.773 13:09:43 -- nvmf/common.sh@479 -- # killprocess 4112935 00:26:38.773 13:09:43 -- common/autotest_common.sh@936 -- # '[' -z 4112935 ']' 00:26:38.773 13:09:43 -- common/autotest_common.sh@940 -- # kill -0 4112935 00:26:38.773 13:09:43 -- common/autotest_common.sh@941 -- # uname 00:26:38.773 13:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:38.773 13:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4112935 00:26:38.773 13:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:38.773 13:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:38.773 13:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4112935' 00:26:38.773 killing process with pid 4112935 00:26:38.773 13:09:43 -- common/autotest_common.sh@955 -- # kill 4112935 00:26:38.773 [2024-04-26 13:09:43.654847] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:38.773 13:09:43 -- common/autotest_common.sh@960 -- # wait 4112935 00:26:38.773 13:09:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:38.773 13:09:43 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:38.773 13:09:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:38.773 13:09:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.773 13:09:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.773 13:09:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.773 13:09:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.773 13:09:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.315 13:09:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:41.315 00:26:41.315 real 0m11.060s 00:26:41.315 user 0m7.812s 00:26:41.315 sys 0m5.697s 00:26:41.315 13:09:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:41.315 13:09:45 -- common/autotest_common.sh@10 -- # set +x 00:26:41.315 ************************************ 00:26:41.315 END TEST nvmf_identify 00:26:41.315 ************************************ 00:26:41.315 13:09:45 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:41.315 13:09:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:41.315 13:09:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:41.315 13:09:45 -- common/autotest_common.sh@10 -- # set +x 00:26:41.315 ************************************ 00:26:41.315 START TEST nvmf_perf 00:26:41.315 ************************************ 00:26:41.315 13:09:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:41.315 * Looking for test storage... 00:26:41.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.315 13:09:46 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.315 13:09:46 -- nvmf/common.sh@7 -- # uname -s 00:26:41.315 13:09:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.315 13:09:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.315 13:09:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.315 13:09:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.315 13:09:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.315 13:09:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.315 13:09:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.315 13:09:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.315 13:09:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.315 13:09:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.315 13:09:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:41.315 13:09:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:41.315 13:09:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.315 13:09:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.315 13:09:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.315 13:09:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.315 13:09:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.315 13:09:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.315 13:09:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.315 13:09:46 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.315 13:09:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.315 13:09:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.315 13:09:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.315 13:09:46 -- paths/export.sh@5 -- # export PATH 00:26:41.315 13:09:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.315 13:09:46 -- nvmf/common.sh@47 -- # : 0 00:26:41.315 13:09:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.315 13:09:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.315 13:09:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.315 13:09:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.315 13:09:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.315 13:09:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.315 13:09:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.315 13:09:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.315 13:09:46 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:41.315 13:09:46 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:41.315 13:09:46 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:41.315 13:09:46 -- host/perf.sh@17 -- # nvmftestinit 00:26:41.315 13:09:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:41.315 13:09:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.315 13:09:46 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:26:41.315 13:09:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:41.315 13:09:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:41.315 13:09:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.315 13:09:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.315 13:09:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.315 13:09:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:41.315 13:09:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:41.315 13:09:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.315 13:09:46 -- common/autotest_common.sh@10 -- # set +x 00:26:49.456 13:09:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:49.456 13:09:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.456 13:09:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.456 13:09:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.456 13:09:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.456 13:09:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.456 13:09:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.456 13:09:53 -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.456 13:09:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.456 13:09:53 -- nvmf/common.sh@296 -- # e810=() 00:26:49.456 13:09:53 -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.456 13:09:53 -- nvmf/common.sh@297 -- # x722=() 00:26:49.456 13:09:53 -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.456 13:09:53 -- nvmf/common.sh@298 -- # mlx=() 00:26:49.456 13:09:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.456 13:09:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.456 13:09:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.456 13:09:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.456 13:09:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.456 13:09:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.456 13:09:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:49.456 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:49.456 13:09:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.456 13:09:53 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.456 13:09:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:49.456 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:49.456 13:09:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.456 13:09:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.456 13:09:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.457 13:09:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.457 13:09:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:49.457 13:09:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.457 13:09:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:49.457 Found net devices under 0000:31:00.0: cvl_0_0 00:26:49.457 13:09:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.457 13:09:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.457 13:09:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.457 13:09:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:49.457 13:09:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.457 13:09:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:49.457 Found net devices under 0000:31:00.1: cvl_0_1 00:26:49.457 13:09:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.457 13:09:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:49.457 13:09:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:49.457 13:09:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:49.457 13:09:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:49.457 13:09:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:49.457 13:09:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.457 13:09:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.457 13:09:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.457 13:09:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.457 13:09:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.457 13:09:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.457 13:09:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.457 13:09:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.457 13:09:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.457 13:09:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.457 13:09:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.457 13:09:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.457 13:09:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.457 13:09:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.457 13:09:53 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.457 13:09:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.457 13:09:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.457 13:09:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.457 13:09:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.457 13:09:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:26:49.457 00:26:49.457 --- 10.0.0.2 ping statistics --- 00:26:49.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.457 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:26:49.457 13:09:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:26:49.457 00:26:49.457 --- 10.0.0.1 ping statistics --- 00:26:49.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.457 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:26:49.457 13:09:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.457 13:09:53 -- nvmf/common.sh@411 -- # return 0 00:26:49.457 13:09:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:49.457 13:09:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.457 13:09:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:49.457 13:09:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:49.457 13:09:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.457 13:09:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:49.457 13:09:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:49.457 13:09:53 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:49.457 13:09:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:49.457 13:09:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:49.457 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:26:49.457 13:09:53 -- nvmf/common.sh@470 -- # nvmfpid=4117357 00:26:49.457 13:09:53 -- nvmf/common.sh@471 -- # waitforlisten 4117357 00:26:49.457 13:09:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:49.457 13:09:53 -- common/autotest_common.sh@817 -- # '[' -z 4117357 ']' 00:26:49.457 13:09:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.457 13:09:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:49.457 13:09:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.457 13:09:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:49.457 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:26:49.457 [2024-04-26 13:09:53.471653] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:26:49.457 [2024-04-26 13:09:53.471703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.457 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.457 [2024-04-26 13:09:53.538308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.457 [2024-04-26 13:09:53.601110] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.457 [2024-04-26 13:09:53.601145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.457 [2024-04-26 13:09:53.601154] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.457 [2024-04-26 13:09:53.601162] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.457 [2024-04-26 13:09:53.601168] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.457 [2024-04-26 13:09:53.601331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.457 [2024-04-26 13:09:53.601348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.457 [2024-04-26 13:09:53.601481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.457 [2024-04-26 13:09:53.601483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.457 13:09:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:49.457 13:09:54 -- common/autotest_common.sh@850 -- # return 0 00:26:49.457 13:09:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:49.457 13:09:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:49.457 13:09:54 -- common/autotest_common.sh@10 -- # set +x 00:26:49.457 13:09:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.457 13:09:54 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:49.457 13:09:54 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:49.718 13:09:54 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:49.718 13:09:54 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:49.979 13:09:54 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:49.979 13:09:54 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:50.240 13:09:55 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:50.240 13:09:55 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:50.240 13:09:55 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:50.240 13:09:55 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:50.240 13:09:55 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:50.240 [2024-04-26 13:09:55.238722] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.240 13:09:55 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.501 13:09:55 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:50.501 13:09:55 -- host/perf.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.763 13:09:55 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:50.763 13:09:55 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:50.763 13:09:55 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.025 [2024-04-26 13:09:55.917305] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.025 13:09:55 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:51.286 13:09:56 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:51.286 13:09:56 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:51.286 13:09:56 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:51.286 13:09:56 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:52.673 Initializing NVMe Controllers 00:26:52.673 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:52.673 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:52.673 Initialization complete. Launching workers. 00:26:52.673 ======================================================== 00:26:52.673 Latency(us) 00:26:52.673 Device Information : IOPS MiB/s Average min max 00:26:52.673 PCIE (0000:65:00.0) NSID 1 from core 0: 80245.71 313.46 398.13 13.26 5283.92 00:26:52.673 ======================================================== 00:26:52.673 Total : 80245.71 313.46 398.13 13.26 5283.92 00:26:52.673 00:26:52.673 13:09:57 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:52.673 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.619 Initializing NVMe Controllers 00:26:53.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:53.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:53.619 Initialization complete. Launching workers. 
00:26:53.619 ======================================================== 00:26:53.619 Latency(us) 00:26:53.619 Device Information : IOPS MiB/s Average min max 00:26:53.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 0.32 12514.07 139.67 45146.61 00:26:53.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 43.00 0.17 23757.44 7960.44 48886.65 00:26:53.619 ======================================================== 00:26:53.619 Total : 126.00 0.49 16351.09 139.67 48886.65 00:26:53.619 00:26:53.619 13:09:58 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:53.879 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.265 Initializing NVMe Controllers 00:26:55.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:55.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:55.265 Initialization complete. Launching workers. 00:26:55.265 ======================================================== 00:26:55.265 Latency(us) 00:26:55.265 Device Information : IOPS MiB/s Average min max 00:26:55.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10257.98 40.07 3124.38 492.40 8960.53 00:26:55.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3753.99 14.66 8562.81 6226.78 16828.94 00:26:55.265 ======================================================== 00:26:55.265 Total : 14011.97 54.73 4581.41 492.40 16828.94 00:26:55.265 00:26:55.265 13:09:59 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:55.265 13:09:59 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:55.265 13:09:59 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.265 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.813 Initializing NVMe Controllers 00:26:57.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.813 Controller IO queue size 128, less than required. 00:26:57.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.813 Controller IO queue size 128, less than required. 00:26:57.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:57.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:57.813 Initialization complete. Launching workers. 
00:26:57.813 ======================================================== 00:26:57.813 Latency(us) 00:26:57.813 Device Information : IOPS MiB/s Average min max 00:26:57.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1501.49 375.37 86718.87 54067.41 142792.21 00:26:57.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.50 145.12 230182.73 64443.18 361878.76 00:26:57.813 ======================================================== 00:26:57.813 Total : 2081.99 520.50 126719.24 54067.41 361878.76 00:26:57.813 00:26:57.813 13:10:02 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:57.813 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.813 No valid NVMe controllers or AIO or URING devices found 00:26:57.813 Initializing NVMe Controllers 00:26:57.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.813 Controller IO queue size 128, less than required. 00:26:57.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.813 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:57.813 Controller IO queue size 128, less than required. 00:26:57.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.813 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:57.813 WARNING: Some requested NVMe devices were skipped 00:26:57.813 13:10:02 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:57.813 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.358 Initializing NVMe Controllers 00:27:00.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.358 Controller IO queue size 128, less than required. 00:27:00.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.358 Controller IO queue size 128, less than required. 00:27:00.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:00.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:00.358 Initialization complete. Launching workers. 
00:27:00.358 00:27:00.358 ==================== 00:27:00.358 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:00.358 TCP transport: 00:27:00.358 polls: 25704 00:27:00.358 idle_polls: 13740 00:27:00.358 sock_completions: 11964 00:27:00.358 nvme_completions: 5927 00:27:00.358 submitted_requests: 8824 00:27:00.358 queued_requests: 1 00:27:00.358 00:27:00.358 ==================== 00:27:00.358 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:00.358 TCP transport: 00:27:00.358 polls: 26917 00:27:00.358 idle_polls: 13364 00:27:00.358 sock_completions: 13553 00:27:00.358 nvme_completions: 6551 00:27:00.358 submitted_requests: 9754 00:27:00.358 queued_requests: 1 00:27:00.358 ======================================================== 00:27:00.358 Latency(us) 00:27:00.358 Device Information : IOPS MiB/s Average min max 00:27:00.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1479.03 369.76 87507.35 47321.18 134769.41 00:27:00.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1634.77 408.69 79916.60 50432.89 118093.91 00:27:00.358 ======================================================== 00:27:00.358 Total : 3113.80 778.45 83522.15 47321.18 134769.41 00:27:00.358 00:27:00.358 13:10:05 -- host/perf.sh@66 -- # sync 00:27:00.358 13:10:05 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.358 13:10:05 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:00.358 13:10:05 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:27:00.358 13:10:05 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:01.744 13:10:06 -- host/perf.sh@72 -- # ls_guid=037a928d-d4b2-48ae-b807-b29d26d8522b 00:27:01.744 13:10:06 -- host/perf.sh@73 -- # get_lvs_free_mb 037a928d-d4b2-48ae-b807-b29d26d8522b 00:27:01.744 13:10:06 -- common/autotest_common.sh@1350 -- # local lvs_uuid=037a928d-d4b2-48ae-b807-b29d26d8522b 00:27:01.744 13:10:06 -- common/autotest_common.sh@1351 -- # local lvs_info 00:27:01.744 13:10:06 -- common/autotest_common.sh@1352 -- # local fc 00:27:01.744 13:10:06 -- common/autotest_common.sh@1353 -- # local cs 00:27:01.744 13:10:06 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:01.744 13:10:06 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:01.744 { 00:27:01.744 "uuid": "037a928d-d4b2-48ae-b807-b29d26d8522b", 00:27:01.744 "name": "lvs_0", 00:27:01.744 "base_bdev": "Nvme0n1", 00:27:01.744 "total_data_clusters": 457407, 00:27:01.744 "free_clusters": 457407, 00:27:01.744 "block_size": 512, 00:27:01.744 "cluster_size": 4194304 00:27:01.744 } 00:27:01.744 ]' 00:27:01.744 13:10:06 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="037a928d-d4b2-48ae-b807-b29d26d8522b") .free_clusters' 00:27:01.744 13:10:06 -- common/autotest_common.sh@1355 -- # fc=457407 00:27:01.744 13:10:06 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="037a928d-d4b2-48ae-b807-b29d26d8522b") .cluster_size' 00:27:01.744 13:10:06 -- common/autotest_common.sh@1356 -- # cs=4194304 00:27:01.744 13:10:06 -- common/autotest_common.sh@1359 -- # free_mb=1829628 00:27:01.744 13:10:06 -- common/autotest_common.sh@1360 -- # echo 1829628 00:27:01.744 1829628 00:27:01.744 13:10:06 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:27:01.744 
13:10:06 -- host/perf.sh@78 -- # free_mb=20480 00:27:01.744 13:10:06 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 037a928d-d4b2-48ae-b807-b29d26d8522b lbd_0 20480 00:27:02.006 13:10:06 -- host/perf.sh@80 -- # lb_guid=001ace30-2fec-49e3-8dbf-0a0b16cc3163 00:27:02.006 13:10:06 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 001ace30-2fec-49e3-8dbf-0a0b16cc3163 lvs_n_0 00:27:03.923 13:10:08 -- host/perf.sh@83 -- # ls_nested_guid=d929e3ca-e893-4aa2-bb2e-ce3d9a9601f1 00:27:03.923 13:10:08 -- host/perf.sh@84 -- # get_lvs_free_mb d929e3ca-e893-4aa2-bb2e-ce3d9a9601f1 00:27:03.923 13:10:08 -- common/autotest_common.sh@1350 -- # local lvs_uuid=d929e3ca-e893-4aa2-bb2e-ce3d9a9601f1 00:27:03.923 13:10:08 -- common/autotest_common.sh@1351 -- # local lvs_info 00:27:03.923 13:10:08 -- common/autotest_common.sh@1352 -- # local fc 00:27:03.923 13:10:08 -- common/autotest_common.sh@1353 -- # local cs 00:27:03.923 13:10:08 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.923 13:10:08 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:03.923 { 00:27:03.923 "uuid": "037a928d-d4b2-48ae-b807-b29d26d8522b", 00:27:03.923 "name": "lvs_0", 00:27:03.923 "base_bdev": "Nvme0n1", 00:27:03.923 "total_data_clusters": 457407, 00:27:03.923 "free_clusters": 452287, 00:27:03.923 "block_size": 512, 00:27:03.923 "cluster_size": 4194304 00:27:03.923 }, 00:27:03.923 { 00:27:03.923 "uuid": "d929e3ca-e893-4aa2-bb2e-ce3d9a9601f1", 00:27:03.923 "name": "lvs_n_0", 00:27:03.923 "base_bdev": "001ace30-2fec-49e3-8dbf-0a0b16cc3163", 00:27:03.923 "total_data_clusters": 5114, 00:27:03.923 "free_clusters": 5114, 00:27:03.923 "block_size": 512, 00:27:03.923 "cluster_size": 4194304 00:27:03.923 } 00:27:03.923 ]' 00:27:03.923 13:10:08 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="d929e3ca-e893-4aa2-bb2e-ce3d9a9601f1") .free_clusters' 00:27:03.923 13:10:08 -- common/autotest_common.sh@1355 -- # fc=5114 00:27:03.923 13:10:08 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="d929e3ca-e893-4aa2-bb2e-ce3d9a9601f1") .cluster_size' 00:27:03.923 13:10:08 -- common/autotest_common.sh@1356 -- # cs=4194304 00:27:03.923 13:10:08 -- common/autotest_common.sh@1359 -- # free_mb=20456 00:27:03.923 13:10:08 -- common/autotest_common.sh@1360 -- # echo 20456 00:27:03.923 20456 00:27:03.923 13:10:08 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:03.923 13:10:08 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d929e3ca-e893-4aa2-bb2e-ce3d9a9601f1 lbd_nest_0 20456 00:27:03.923 13:10:08 -- host/perf.sh@88 -- # lb_nested_guid=6d5dc6b6-140a-4c66-858f-fc3bdc81784d 00:27:03.923 13:10:08 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.183 13:10:09 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:04.184 13:10:09 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6d5dc6b6-140a-4c66-858f-fc3bdc81784d 00:27:04.445 13:10:09 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.445 13:10:09 -- host/perf.sh@95 -- # 
qd_depth=("1" "32" "128") 00:27:04.445 13:10:09 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:04.445 13:10:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:04.445 13:10:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:04.445 13:10:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.445 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.678 Initializing NVMe Controllers 00:27:16.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:16.678 Initialization complete. Launching workers. 00:27:16.678 ======================================================== 00:27:16.678 Latency(us) 00:27:16.678 Device Information : IOPS MiB/s Average min max 00:27:16.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.39 0.02 21589.12 229.97 48666.64 00:27:16.678 ======================================================== 00:27:16.678 Total : 46.39 0.02 21589.12 229.97 48666.64 00:27:16.678 00:27:16.678 13:10:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:16.678 13:10:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.678 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.774 Initializing NVMe Controllers 00:27:26.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:26.774 Initialization complete. Launching workers. 00:27:26.774 ======================================================== 00:27:26.774 Latency(us) 00:27:26.774 Device Information : IOPS MiB/s Average min max 00:27:26.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 66.00 8.25 15161.06 5987.20 48864.79 00:27:26.774 ======================================================== 00:27:26.774 Total : 66.00 8.25 15161.06 5987.20 48864.79 00:27:26.774 00:27:26.774 13:10:30 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:26.774 13:10:30 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:26.774 13:10:30 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.774 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.833 Initializing NVMe Controllers 00:27:36.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:36.833 Initialization complete. Launching workers. 
00:27:36.833 ======================================================== 00:27:36.833 Latency(us) 00:27:36.833 Device Information : IOPS MiB/s Average min max 00:27:36.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8611.92 4.21 3715.44 301.66 7807.51 00:27:36.834 ======================================================== 00:27:36.834 Total : 8611.92 4.21 3715.44 301.66 7807.51 00:27:36.834 00:27:36.834 13:10:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:36.834 13:10:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:36.834 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.832 Initializing NVMe Controllers 00:27:46.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:46.832 Initialization complete. Launching workers. 00:27:46.832 ======================================================== 00:27:46.832 Latency(us) 00:27:46.832 Device Information : IOPS MiB/s Average min max 00:27:46.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3423.13 427.89 9349.10 776.76 22539.16 00:27:46.832 ======================================================== 00:27:46.832 Total : 3423.13 427.89 9349.10 776.76 22539.16 00:27:46.832 00:27:46.832 13:10:50 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:46.832 13:10:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:46.832 13:10:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.832 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.841 Initializing NVMe Controllers 00:27:56.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:56.841 Controller IO queue size 128, less than required. 00:27:56.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:56.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:56.841 Initialization complete. Launching workers. 00:27:56.841 ======================================================== 00:27:56.841 Latency(us) 00:27:56.841 Device Information : IOPS MiB/s Average min max 00:27:56.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15812.60 7.72 8098.42 1920.17 21161.81 00:27:56.841 ======================================================== 00:27:56.841 Total : 15812.60 7.72 8098.42 1920.17 21161.81 00:27:56.841 00:27:56.841 13:11:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:56.841 13:11:01 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.841 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.837 Initializing NVMe Controllers 00:28:06.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.837 Controller IO queue size 128, less than required. 00:28:06.837 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:06.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.837 Initialization complete. Launching workers. 00:28:06.837 ======================================================== 00:28:06.837 Latency(us) 00:28:06.837 Device Information : IOPS MiB/s Average min max 00:28:06.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1189.70 148.71 108376.94 14446.76 229967.70 00:28:06.837 ======================================================== 00:28:06.837 Total : 1189.70 148.71 108376.94 14446.76 229967.70 00:28:06.837 00:28:06.837 13:11:11 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.837 13:11:11 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d5dc6b6-140a-4c66-858f-fc3bdc81784d 00:28:08.223 13:11:13 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:08.485 13:11:13 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 001ace30-2fec-49e3-8dbf-0a0b16cc3163 00:28:08.749 13:11:13 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:08.749 13:11:13 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:08.749 13:11:13 -- host/perf.sh@114 -- # nvmftestfini 00:28:08.749 13:11:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:08.749 13:11:13 -- nvmf/common.sh@117 -- # sync 00:28:08.749 13:11:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.749 13:11:13 -- nvmf/common.sh@120 -- # set +e 00:28:08.749 13:11:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.749 13:11:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.749 rmmod nvme_tcp 00:28:09.011 rmmod nvme_fabrics 00:28:09.011 rmmod nvme_keyring 00:28:09.011 13:11:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.011 13:11:13 -- nvmf/common.sh@124 -- # set -e 00:28:09.011 13:11:13 -- nvmf/common.sh@125 -- # return 0 00:28:09.011 13:11:13 -- nvmf/common.sh@478 -- # '[' -n 4117357 ']' 00:28:09.011 13:11:13 -- nvmf/common.sh@479 -- # killprocess 4117357 00:28:09.011 13:11:13 -- common/autotest_common.sh@936 -- # '[' -z 4117357 ']' 00:28:09.011 13:11:13 -- common/autotest_common.sh@940 -- # kill -0 4117357 00:28:09.011 13:11:13 -- common/autotest_common.sh@941 -- # uname 00:28:09.011 13:11:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:09.011 13:11:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4117357 00:28:09.011 13:11:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:09.011 13:11:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:09.011 13:11:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4117357' 00:28:09.011 killing process with pid 4117357 00:28:09.011 13:11:13 -- common/autotest_common.sh@955 -- # kill 4117357 00:28:09.011 13:11:13 -- common/autotest_common.sh@960 -- # wait 4117357 00:28:10.927 13:11:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:10.927 13:11:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:10.927 13:11:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:10.927 13:11:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.927 13:11:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.927 13:11:15 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.927 13:11:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.927 13:11:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.473 13:11:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.473 00:28:13.473 real 1m31.919s 00:28:13.473 user 5m26.120s 00:28:13.473 sys 0m13.953s 00:28:13.473 13:11:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:13.473 13:11:17 -- common/autotest_common.sh@10 -- # set +x 00:28:13.473 ************************************ 00:28:13.473 END TEST nvmf_perf 00:28:13.473 ************************************ 00:28:13.473 13:11:18 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:13.473 13:11:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:13.473 13:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:13.473 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:28:13.473 ************************************ 00:28:13.473 START TEST nvmf_fio_host 00:28:13.473 ************************************ 00:28:13.473 13:11:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:13.473 * Looking for test storage... 00:28:13.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.473 13:11:18 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.473 13:11:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.473 13:11:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.473 13:11:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.473 13:11:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- paths/export.sh@5 -- # export PATH 00:28:13.473 13:11:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.473 13:11:18 -- nvmf/common.sh@7 -- # uname -s 00:28:13.473 13:11:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.473 13:11:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.473 13:11:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.473 13:11:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.473 13:11:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.473 13:11:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.473 13:11:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.473 13:11:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.473 13:11:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.473 13:11:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.473 13:11:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:13.473 13:11:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:13.473 13:11:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.473 13:11:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.473 13:11:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.473 13:11:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.473 13:11:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.473 13:11:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.473 13:11:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.473 13:11:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.473 13:11:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- paths/export.sh@5 -- # export PATH 00:28:13.473 13:11:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.473 13:11:18 -- nvmf/common.sh@47 -- # : 0 00:28:13.473 13:11:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.473 13:11:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.473 13:11:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.473 13:11:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.473 13:11:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.473 13:11:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.473 13:11:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.473 13:11:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.473 13:11:18 -- host/fio.sh@12 -- # nvmftestinit 00:28:13.473 13:11:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:13.473 13:11:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.473 13:11:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:13.473 13:11:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:13.473 13:11:18 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:28:13.473 13:11:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.473 13:11:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.473 13:11:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.473 13:11:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:13.473 13:11:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:13.473 13:11:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.473 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:28:21.613 13:11:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:21.613 13:11:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.613 13:11:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.613 13:11:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.613 13:11:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.613 13:11:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.613 13:11:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.613 13:11:25 -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.613 13:11:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.613 13:11:25 -- nvmf/common.sh@296 -- # e810=() 00:28:21.613 13:11:25 -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.613 13:11:25 -- nvmf/common.sh@297 -- # x722=() 00:28:21.613 13:11:25 -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.613 13:11:25 -- nvmf/common.sh@298 -- # mlx=() 00:28:21.613 13:11:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.613 13:11:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.613 13:11:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.613 13:11:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.613 13:11:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.613 13:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.613 13:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:21.613 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:21.613 13:11:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.613 13:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:21.613 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:21.613 13:11:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.613 13:11:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.613 13:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.613 13:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:21.613 13:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.613 13:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:21.613 Found net devices under 0000:31:00.0: cvl_0_0 00:28:21.613 13:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.613 13:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.613 13:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.613 13:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:21.613 13:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.613 13:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:21.613 Found net devices under 0000:31:00.1: cvl_0_1 00:28:21.613 13:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.613 13:11:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:21.613 13:11:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:21.613 13:11:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:21.613 13:11:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.613 13:11:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.613 13:11:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.613 13:11:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.613 13:11:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.613 13:11:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.613 13:11:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.613 13:11:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.613 13:11:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.613 13:11:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.613 13:11:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.613 13:11:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.613 13:11:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.613 13:11:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.613 13:11:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.613 13:11:25 -- nvmf/common.sh@258 -- # ip link set 
cvl_0_1 up 00:28:21.613 13:11:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.613 13:11:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.613 13:11:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.613 13:11:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:28:21.613 00:28:21.613 --- 10.0.0.2 ping statistics --- 00:28:21.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.613 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:28:21.613 13:11:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:28:21.613 00:28:21.613 --- 10.0.0.1 ping statistics --- 00:28:21.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.613 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:21.613 13:11:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.613 13:11:25 -- nvmf/common.sh@411 -- # return 0 00:28:21.613 13:11:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:21.613 13:11:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.613 13:11:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:21.613 13:11:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.613 13:11:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:21.613 13:11:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:21.613 13:11:25 -- host/fio.sh@14 -- # [[ y != y ]] 00:28:21.614 13:11:25 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:28:21.614 13:11:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:21.614 13:11:25 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 13:11:25 -- host/fio.sh@22 -- # nvmfpid=4137192 00:28:21.614 13:11:25 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:21.614 13:11:25 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:21.614 13:11:25 -- host/fio.sh@26 -- # waitforlisten 4137192 00:28:21.614 13:11:25 -- common/autotest_common.sh@817 -- # '[' -z 4137192 ']' 00:28:21.614 13:11:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.614 13:11:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:21.614 13:11:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.614 13:11:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:21.614 13:11:25 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 [2024-04-26 13:11:25.589407] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:28:21.614 [2024-04-26 13:11:25.589475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.614 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.614 [2024-04-26 13:11:25.662718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.614 [2024-04-26 13:11:25.735095] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.614 [2024-04-26 13:11:25.735134] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.614 [2024-04-26 13:11:25.735143] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.614 [2024-04-26 13:11:25.735151] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.614 [2024-04-26 13:11:25.735158] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.614 [2024-04-26 13:11:25.735323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.614 [2024-04-26 13:11:25.735456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.614 [2024-04-26 13:11:25.735616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.614 [2024-04-26 13:11:25.735617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.614 13:11:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:21.614 13:11:26 -- common/autotest_common.sh@850 -- # return 0 00:28:21.614 13:11:26 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.614 13:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.614 13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 [2024-04-26 13:11:26.377280] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.614 13:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.614 13:11:26 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:28:21.614 13:11:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:21.614 13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 13:11:26 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:21.614 13:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.614 13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 Malloc1 00:28:21.614 13:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.614 13:11:26 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.614 13:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.614 13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 13:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.614 13:11:26 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:21.614 13:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.614 13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 13:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.614 13:11:26 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.614 13:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.614 13:11:26 -- common/autotest_common.sh@10 -- # set +x 
00:28:21.614 [2024-04-26 13:11:26.471812] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.614 13:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.614 13:11:26 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:21.614 13:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.614 13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:21.614 13:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.614 13:11:26 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:21.614 13:11:26 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:21.614 13:11:26 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:21.614 13:11:26 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:21.614 13:11:26 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:21.614 13:11:26 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:21.614 13:11:26 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:21.614 13:11:26 -- common/autotest_common.sh@1327 -- # shift 00:28:21.614 13:11:26 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:21.614 13:11:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:21.614 13:11:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:21.614 13:11:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:21.614 13:11:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:21.614 13:11:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:21.614 13:11:26 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:21.614 13:11:26 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:21.873 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:21.873 fio-3.35 00:28:21.873 Starting 1 thread 00:28:21.873 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.410 00:28:24.410 test: (groupid=0, jobs=1): err= 0: pid=4137710: Fri Apr 26 13:11:29 2024 00:28:24.410 read: IOPS=10.4k, 
BW=40.6MiB/s (42.6MB/s)(81.4MiB/2004msec) 00:28:24.410 slat (usec): min=2, max=271, avg= 2.21, stdev= 2.65 00:28:24.410 clat (usec): min=3628, max=9183, avg=6780.33, stdev=1159.28 00:28:24.410 lat (usec): min=3630, max=9185, avg=6782.54, stdev=1159.27 00:28:24.410 clat percentiles (usec): 00:28:24.410 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5276], 00:28:24.410 | 30.00th=[ 5997], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7373], 00:28:24.410 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:28:24.410 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 8979], 99.95th=[ 8979], 00:28:24.410 | 99.99th=[ 9110] 00:28:24.410 bw ( KiB/s): min=36408, max=53392, per=99.84%, avg=41508.00, stdev=7965.08, samples=4 00:28:24.410 iops : min= 9102, max=13348, avg=10377.00, stdev=1991.27, samples=4 00:28:24.410 write: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2004msec); 0 zone resets 00:28:24.410 slat (usec): min=2, max=261, avg= 2.30, stdev= 2.01 00:28:24.410 clat (usec): min=2841, max=8063, avg=5450.39, stdev=921.64 00:28:24.410 lat (usec): min=2858, max=8065, avg=5452.69, stdev=921.66 00:28:24.410 clat percentiles (usec): 00:28:24.410 | 1.00th=[ 3687], 5.00th=[ 3949], 10.00th=[ 4080], 20.00th=[ 4293], 00:28:24.410 | 30.00th=[ 4752], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 5932], 00:28:24.410 | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:28:24.410 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 7242], 99.95th=[ 7439], 00:28:24.410 | 99.99th=[ 7963] 00:28:24.410 bw ( KiB/s): min=37392, max=53376, per=99.95%, avg=41572.00, stdev=7875.58, samples=4 00:28:24.410 iops : min= 9348, max=13344, avg=10393.00, stdev=1968.90, samples=4 00:28:24.410 lat (msec) : 4=3.57%, 10=96.43% 00:28:24.410 cpu : usr=75.04%, sys=24.01%, ctx=27, majf=0, minf=6 00:28:24.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:24.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:24.410 issued rwts: total=20829,20838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:24.411 00:28:24.411 Run status group 0 (all jobs): 00:28:24.411 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.3MB), run=2004-2004msec 00:28:24.411 WRITE: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.4MB), run=2004-2004msec 00:28:24.411 13:11:29 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:24.411 13:11:29 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:24.411 13:11:29 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:24.411 13:11:29 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.411 13:11:29 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:24.411 13:11:29 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.411 13:11:29 -- common/autotest_common.sh@1327 -- # shift 00:28:24.411 13:11:29 -- 
common/autotest_common.sh@1329 -- # local asan_lib= 00:28:24.411 13:11:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:24.411 13:11:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:24.411 13:11:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:24.411 13:11:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:24.411 13:11:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:24.411 13:11:29 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:24.411 13:11:29 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:24.670 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:24.670 fio-3.35 00:28:24.670 Starting 1 thread 00:28:24.670 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.279 00:28:27.279 test: (groupid=0, jobs=1): err= 0: pid=4138356: Fri Apr 26 13:11:31 2024 00:28:27.279 read: IOPS=9481, BW=148MiB/s (155MB/s)(297MiB/2006msec) 00:28:27.279 slat (usec): min=3, max=110, avg= 3.69, stdev= 1.60 00:28:27.279 clat (usec): min=1233, max=16232, avg=8132.56, stdev=2069.71 00:28:27.279 lat (usec): min=1236, max=16236, avg=8136.24, stdev=2069.91 00:28:27.279 clat percentiles (usec): 00:28:27.280 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6325], 00:28:27.280 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7898], 60.00th=[ 8586], 00:28:27.280 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11863], 00:28:27.280 | 99.00th=[13829], 99.50th=[14484], 99.90th=[15008], 99.95th=[15139], 00:28:27.280 | 99.99th=[15926] 00:28:27.280 bw ( KiB/s): min=62304, max=91456, per=49.56%, avg=75184.00, stdev=12117.22, samples=4 00:28:27.280 iops : min= 3894, max= 5716, avg=4699.00, stdev=757.33, samples=4 00:28:27.280 write: IOPS=5463, BW=85.4MiB/s (89.5MB/s)(153MiB/1796msec); 0 zone resets 00:28:27.280 slat (usec): min=40, max=404, avg=41.42, stdev= 8.42 00:28:27.280 clat (usec): min=2043, max=17074, avg=9409.92, stdev=1693.18 00:28:27.280 lat (usec): min=2084, max=17212, avg=9451.34, stdev=1695.77 00:28:27.280 clat percentiles (usec): 00:28:27.280 | 1.00th=[ 6587], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 8029], 00:28:27.280 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:28:27.280 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11600], 95.00th=[12518], 00:28:27.280 | 99.00th=[14484], 99.50th=[15664], 99.90th=[16581], 99.95th=[16909], 00:28:27.280 | 99.99th=[17171] 00:28:27.280 bw ( KiB/s): min=65056, max=94208, per=89.28%, avg=78040.00, stdev=12047.94, samples=4 00:28:27.280 iops : min= 4066, max= 5888, avg=4877.50, 
stdev=753.00, samples=4 00:28:27.280 lat (msec) : 2=0.05%, 4=0.45%, 10=76.48%, 20=23.03% 00:28:27.280 cpu : usr=88.53%, sys=9.93%, ctx=24, majf=0, minf=19 00:28:27.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:27.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:27.280 issued rwts: total=19019,9812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:27.280 00:28:27.280 Run status group 0 (all jobs): 00:28:27.280 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=297MiB (312MB), run=2006-2006msec 00:28:27.280 WRITE: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=153MiB (161MB), run=1796-1796msec 00:28:27.280 13:11:32 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:27.280 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.280 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:27.280 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.280 13:11:32 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:28:27.280 13:11:32 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:28:27.280 13:11:32 -- host/fio.sh@49 -- # get_nvme_bdfs 00:28:27.280 13:11:32 -- common/autotest_common.sh@1499 -- # bdfs=() 00:28:27.280 13:11:32 -- common/autotest_common.sh@1499 -- # local bdfs 00:28:27.280 13:11:32 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:27.280 13:11:32 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:27.280 13:11:32 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:28:27.280 13:11:32 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:28:27.280 13:11:32 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:28:27.280 13:11:32 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:28:27.280 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.280 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:27.540 Nvme0n1 00:28:27.540 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.540 13:11:32 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:27.540 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.540 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:27.800 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.800 13:11:32 -- host/fio.sh@51 -- # ls_guid=946fad3a-bc63-44a9-b6fa-2bdd0719d903 00:28:27.800 13:11:32 -- host/fio.sh@52 -- # get_lvs_free_mb 946fad3a-bc63-44a9-b6fa-2bdd0719d903 00:28:27.800 13:11:32 -- common/autotest_common.sh@1350 -- # local lvs_uuid=946fad3a-bc63-44a9-b6fa-2bdd0719d903 00:28:27.800 13:11:32 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:27.800 13:11:32 -- common/autotest_common.sh@1352 -- # local fc 00:28:27.800 13:11:32 -- common/autotest_common.sh@1353 -- # local cs 00:28:27.800 13:11:32 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:27.800 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.800 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:27.800 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.800 13:11:32 
-- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:27.800 { 00:28:27.800 "uuid": "946fad3a-bc63-44a9-b6fa-2bdd0719d903", 00:28:27.800 "name": "lvs_0", 00:28:27.800 "base_bdev": "Nvme0n1", 00:28:27.800 "total_data_clusters": 1787, 00:28:27.800 "free_clusters": 1787, 00:28:27.800 "block_size": 512, 00:28:27.800 "cluster_size": 1073741824 00:28:27.800 } 00:28:27.800 ]' 00:28:27.800 13:11:32 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="946fad3a-bc63-44a9-b6fa-2bdd0719d903") .free_clusters' 00:28:28.060 13:11:32 -- common/autotest_common.sh@1355 -- # fc=1787 00:28:28.060 13:11:32 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="946fad3a-bc63-44a9-b6fa-2bdd0719d903") .cluster_size' 00:28:28.060 13:11:32 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:28:28.060 13:11:32 -- common/autotest_common.sh@1359 -- # free_mb=1829888 00:28:28.060 13:11:32 -- common/autotest_common.sh@1360 -- # echo 1829888 00:28:28.060 1829888 00:28:28.060 13:11:32 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1829888 00:28:28.060 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.060 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.060 be9c71fe-a97a-486e-a3e0-30d7a68cae8f 00:28:28.060 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.060 13:11:32 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:28.060 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.060 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.060 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.060 13:11:32 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:28.060 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.060 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.060 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.060 13:11:32 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:28.060 13:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.060 13:11:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.060 13:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.060 13:11:32 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.060 13:11:32 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.060 13:11:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:28.060 13:11:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:28.060 13:11:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:28.060 13:11:32 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.060 13:11:32 -- common/autotest_common.sh@1327 -- # shift 00:28:28.060 13:11:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:28.060 13:11:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:28.060 13:11:32 -- 
common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.060 13:11:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:28.060 13:11:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:28.060 13:11:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:28.060 13:11:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:28.060 13:11:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:28.060 13:11:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.060 13:11:33 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:28.060 13:11:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:28.060 13:11:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:28.060 13:11:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:28.060 13:11:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:28.060 13:11:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.325 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:28.325 fio-3.35 00:28:28.325 Starting 1 thread 00:28:28.586 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.129 00:28:31.129 test: (groupid=0, jobs=1): err= 0: pid=4139350: Fri Apr 26 13:11:35 2024 00:28:31.129 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(80.9MiB/2005msec) 00:28:31.129 slat (usec): min=2, max=113, avg= 2.23, stdev= 1.06 00:28:31.129 clat (usec): min=1932, max=11859, avg=6835.47, stdev=514.32 00:28:31.129 lat (usec): min=1950, max=11861, avg=6837.70, stdev=514.27 00:28:31.129 clat percentiles (usec): 00:28:31.129 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6456], 00:28:31.129 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:28:31.129 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:28:31.129 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[ 9503], 99.95th=[10945], 00:28:31.129 | 99.99th=[11731] 00:28:31.129 bw ( KiB/s): min=40168, max=42064, per=99.89%, avg=41274.00, stdev=796.56, samples=4 00:28:31.129 iops : min=10042, max=10516, avg=10318.50, stdev=199.14, samples=4 00:28:31.129 write: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(81.0MiB/2005msec); 0 zone resets 00:28:31.129 slat (nsec): min=2145, max=96973, avg=2321.83, stdev=720.87 00:28:31.129 clat (usec): min=1062, max=9656, avg=5465.20, stdev=439.26 00:28:31.129 lat (usec): min=1070, max=9658, avg=5467.52, stdev=439.24 00:28:31.129 clat percentiles (usec): 00:28:31.129 | 1.00th=[ 4424], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:28:31.129 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5604], 00:28:31.129 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6128], 00:28:31.129 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7832], 99.95th=[ 8979], 00:28:31.129 | 99.99th=[ 9634] 00:28:31.129 bw ( KiB/s): min=40720, max=41728, per=99.99%, avg=41364.00, stdev=461.02, samples=4 00:28:31.129 iops : min=10180, max=10432, avg=10341.00, stdev=115.26, samples=4 00:28:31.129 lat (msec) : 2=0.02%, 4=0.12%, 10=99.82%, 20=0.04% 00:28:31.129 cpu : usr=70.66%, sys=27.84%, ctx=59, majf=0, minf=6 00:28:31.129 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:31.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:31.129 issued rwts: total=20711,20735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.129 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:31.129 00:28:31.129 Run status group 0 (all jobs): 00:28:31.129 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=80.9MiB (84.8MB), run=2005-2005msec 00:28:31.129 WRITE: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=81.0MiB (84.9MB), run=2005-2005msec 00:28:31.129 13:11:35 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:31.129 13:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.129 13:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:31.129 13:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.129 13:11:35 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:31.129 13:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.129 13:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:31.702 13:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.702 13:11:36 -- host/fio.sh@62 -- # ls_nested_guid=5d9aae34-8746-4f66-9509-c2e442f1215a 00:28:31.702 13:11:36 -- host/fio.sh@63 -- # get_lvs_free_mb 5d9aae34-8746-4f66-9509-c2e442f1215a 00:28:31.702 13:11:36 -- common/autotest_common.sh@1350 -- # local lvs_uuid=5d9aae34-8746-4f66-9509-c2e442f1215a 00:28:31.702 13:11:36 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:31.702 13:11:36 -- common/autotest_common.sh@1352 -- # local fc 00:28:31.702 13:11:36 -- common/autotest_common.sh@1353 -- # local cs 00:28:31.702 13:11:36 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:31.702 13:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.702 13:11:36 -- common/autotest_common.sh@10 -- # set +x 00:28:31.702 13:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.702 13:11:36 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:31.702 { 00:28:31.702 "uuid": "946fad3a-bc63-44a9-b6fa-2bdd0719d903", 00:28:31.702 "name": "lvs_0", 00:28:31.702 "base_bdev": "Nvme0n1", 00:28:31.702 "total_data_clusters": 1787, 00:28:31.702 "free_clusters": 0, 00:28:31.702 "block_size": 512, 00:28:31.702 "cluster_size": 1073741824 00:28:31.702 }, 00:28:31.702 { 00:28:31.702 "uuid": "5d9aae34-8746-4f66-9509-c2e442f1215a", 00:28:31.702 "name": "lvs_n_0", 00:28:31.702 "base_bdev": "be9c71fe-a97a-486e-a3e0-30d7a68cae8f", 00:28:31.702 "total_data_clusters": 457025, 00:28:31.702 "free_clusters": 457025, 00:28:31.702 "block_size": 512, 00:28:31.702 "cluster_size": 4194304 00:28:31.702 } 00:28:31.702 ]' 00:28:31.702 13:11:36 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="5d9aae34-8746-4f66-9509-c2e442f1215a") .free_clusters' 00:28:31.702 13:11:36 -- common/autotest_common.sh@1355 -- # fc=457025 00:28:31.702 13:11:36 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="5d9aae34-8746-4f66-9509-c2e442f1215a") .cluster_size' 00:28:31.702 13:11:36 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:31.702 13:11:36 -- common/autotest_common.sh@1359 -- # free_mb=1828100 00:28:31.702 13:11:36 -- common/autotest_common.sh@1360 -- # echo 1828100 00:28:31.702 1828100 00:28:31.702 13:11:36 -- host/fio.sh@64 -- # 
rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:28:31.702 13:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.702 13:11:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.644 19abf71e-7141-4570-90b6-85cdc58b1b5a 00:28:32.644 13:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.644 13:11:37 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:32.644 13:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.644 13:11:37 -- common/autotest_common.sh@10 -- # set +x 00:28:32.644 13:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.644 13:11:37 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:32.644 13:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.644 13:11:37 -- common/autotest_common.sh@10 -- # set +x 00:28:32.644 13:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.644 13:11:37 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:32.644 13:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.644 13:11:37 -- common/autotest_common.sh@10 -- # set +x 00:28:32.644 13:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.644 13:11:37 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:32.644 13:11:37 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:32.644 13:11:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:32.644 13:11:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:32.644 13:11:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:32.644 13:11:37 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:32.644 13:11:37 -- common/autotest_common.sh@1327 -- # shift 00:28:32.644 13:11:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:32.644 13:11:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:32.644 13:11:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:32.644 13:11:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:32.644 13:11:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:32.644 13:11:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:32.644 13:11:37 -- common/autotest_common.sh@1338 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:32.644 13:11:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:32.905 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:32.905 fio-3.35 00:28:32.905 Starting 1 thread 00:28:32.905 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.452 00:28:35.452 test: (groupid=0, jobs=1): err= 0: pid=4140244: Fri Apr 26 13:11:40 2024 00:28:35.452 read: IOPS=9139, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2006msec) 00:28:35.452 slat (usec): min=2, max=109, avg= 2.25, stdev= 1.22 00:28:35.452 clat (usec): min=2859, max=12690, avg=7747.48, stdev=604.88 00:28:35.452 lat (usec): min=2876, max=12692, avg=7749.73, stdev=604.82 00:28:35.452 clat percentiles (usec): 00:28:35.452 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7242], 00:28:35.452 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:28:35.452 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:28:35.452 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[10814], 99.95th=[11600], 00:28:35.452 | 99.99th=[12649] 00:28:35.452 bw ( KiB/s): min=35640, max=37064, per=99.90%, avg=36522.00, stdev=619.90, samples=4 00:28:35.452 iops : min= 8910, max= 9266, avg=9130.50, stdev=154.97, samples=4 00:28:35.452 write: IOPS=9148, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2006msec); 0 zone resets 00:28:35.452 slat (nsec): min=2155, max=122610, avg=2337.15, stdev=938.38 00:28:35.452 clat (usec): min=1065, max=10993, avg=6175.73, stdev=522.34 00:28:35.452 lat (usec): min=1072, max=10995, avg=6178.07, stdev=522.32 00:28:35.452 clat percentiles (usec): 00:28:35.452 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5800], 00:28:35.452 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:28:35.452 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:28:35.452 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 9634], 99.95th=[10421], 00:28:35.452 | 99.99th=[10945] 00:28:35.452 bw ( KiB/s): min=36432, max=36800, per=100.00%, avg=36596.00, stdev=171.02, samples=4 00:28:35.452 iops : min= 9108, max= 9200, avg=9149.00, stdev=42.76, samples=4 00:28:35.452 lat (msec) : 2=0.01%, 4=0.10%, 10=99.75%, 20=0.14% 00:28:35.452 cpu : usr=72.27%, sys=26.28%, ctx=55, majf=0, minf=6 00:28:35.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:35.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:35.452 issued rwts: total=18334,18352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:35.452 00:28:35.452 Run status group 0 (all jobs): 00:28:35.452 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2006-2006msec 00:28:35.452 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2006-2006msec 00:28:35.452 13:11:40 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:35.452 13:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.452 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:28:35.452 13:11:40 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:28:35.452 13:11:40 -- host/fio.sh@72 -- # sync 00:28:35.452 13:11:40 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:35.452 13:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.452 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:28:37.368 13:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.368 13:11:42 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:28:37.368 13:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.368 13:11:42 -- common/autotest_common.sh@10 -- # set +x 00:28:37.368 13:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.368 13:11:42 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:28:37.368 13:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.368 13:11:42 -- common/autotest_common.sh@10 -- # set +x 00:28:37.629 13:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.629 13:11:42 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:28:37.629 13:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.629 13:11:42 -- common/autotest_common.sh@10 -- # set +x 00:28:37.629 13:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.629 13:11:42 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:28:37.629 13:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.629 13:11:42 -- common/autotest_common.sh@10 -- # set +x 00:28:39.544 13:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.544 13:11:44 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:28:39.544 13:11:44 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:28:39.544 13:11:44 -- host/fio.sh@84 -- # nvmftestfini 00:28:39.544 13:11:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:39.544 13:11:44 -- nvmf/common.sh@117 -- # sync 00:28:39.544 13:11:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.544 13:11:44 -- nvmf/common.sh@120 -- # set +e 00:28:39.544 13:11:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.544 13:11:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.544 rmmod nvme_tcp 00:28:39.544 rmmod nvme_fabrics 00:28:39.544 rmmod nvme_keyring 00:28:39.544 13:11:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.544 13:11:44 -- nvmf/common.sh@124 -- # set -e 00:28:39.544 13:11:44 -- nvmf/common.sh@125 -- # return 0 00:28:39.544 13:11:44 -- nvmf/common.sh@478 -- # '[' -n 4137192 ']' 00:28:39.544 13:11:44 -- nvmf/common.sh@479 -- # killprocess 4137192 00:28:39.544 13:11:44 -- common/autotest_common.sh@936 -- # '[' -z 4137192 ']' 00:28:39.544 13:11:44 -- common/autotest_common.sh@940 -- # kill -0 4137192 00:28:39.544 13:11:44 -- common/autotest_common.sh@941 -- # uname 00:28:39.544 13:11:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:39.544 13:11:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4137192 00:28:39.544 13:11:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:39.544 13:11:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:39.544 13:11:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4137192' 00:28:39.544 killing process with pid 4137192 00:28:39.544 13:11:44 -- common/autotest_common.sh@955 -- # kill 4137192 00:28:39.544 13:11:44 -- common/autotest_common.sh@960 -- # wait 4137192 00:28:39.805 13:11:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:39.805 13:11:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p 
]] 00:28:39.805 13:11:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:39.805 13:11:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.805 13:11:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:39.805 13:11:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.805 13:11:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.805 13:11:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.352 13:11:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.352 00:28:42.352 real 0m28.635s 00:28:42.352 user 2m21.565s 00:28:42.352 sys 0m8.834s 00:28:42.352 13:11:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:42.352 13:11:46 -- common/autotest_common.sh@10 -- # set +x 00:28:42.352 ************************************ 00:28:42.352 END TEST nvmf_fio_host 00:28:42.352 ************************************ 00:28:42.352 13:11:46 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:42.352 13:11:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:42.352 13:11:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:42.352 13:11:46 -- common/autotest_common.sh@10 -- # set +x 00:28:42.352 ************************************ 00:28:42.352 START TEST nvmf_failover 00:28:42.352 ************************************ 00:28:42.352 13:11:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:42.352 * Looking for test storage... 00:28:42.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.352 13:11:47 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.352 13:11:47 -- nvmf/common.sh@7 -- # uname -s 00:28:42.352 13:11:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.352 13:11:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.352 13:11:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.352 13:11:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.352 13:11:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.352 13:11:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.352 13:11:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.352 13:11:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.352 13:11:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.352 13:11:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.352 13:11:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:42.352 13:11:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:42.352 13:11:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.352 13:11:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.352 13:11:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.352 13:11:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.352 13:11:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.352 13:11:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.352 13:11:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.352 13:11:47 -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.353 13:11:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.353 13:11:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.353 13:11:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.353 13:11:47 -- paths/export.sh@5 -- # export PATH 00:28:42.353 13:11:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.353 13:11:47 -- nvmf/common.sh@47 -- # : 0 00:28:42.353 13:11:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.353 13:11:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.353 13:11:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.353 13:11:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.353 13:11:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.353 13:11:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:42.353 13:11:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.353 13:11:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.353 13:11:47 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:42.353 13:11:47 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:42.353 13:11:47 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:42.353 13:11:47 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:42.353 13:11:47 -- host/failover.sh@18 -- # nvmftestinit 00:28:42.353 13:11:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:42.353 13:11:47 -- nvmf/common.sh@435 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:42.353 13:11:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:42.353 13:11:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:42.353 13:11:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:42.353 13:11:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.353 13:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.353 13:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.353 13:11:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:42.353 13:11:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:42.353 13:11:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:42.353 13:11:47 -- common/autotest_common.sh@10 -- # set +x 00:28:50.495 13:11:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:50.495 13:11:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:50.495 13:11:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:50.495 13:11:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:50.495 13:11:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:50.495 13:11:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:50.495 13:11:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:50.495 13:11:54 -- nvmf/common.sh@295 -- # net_devs=() 00:28:50.495 13:11:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:50.495 13:11:54 -- nvmf/common.sh@296 -- # e810=() 00:28:50.495 13:11:54 -- nvmf/common.sh@296 -- # local -ga e810 00:28:50.495 13:11:54 -- nvmf/common.sh@297 -- # x722=() 00:28:50.495 13:11:54 -- nvmf/common.sh@297 -- # local -ga x722 00:28:50.495 13:11:54 -- nvmf/common.sh@298 -- # mlx=() 00:28:50.495 13:11:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:50.495 13:11:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.495 13:11:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:50.495 13:11:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:50.495 13:11:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:50.495 13:11:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:50.495 13:11:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:50.495 13:11:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:50.495 13:11:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.495 13:11:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:50.495 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:50.495 13:11:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.495 13:11:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.495 13:11:54 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.495 13:11:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.495 13:11:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.496 13:11:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:50.496 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:50.496 13:11:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:50.496 13:11:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.496 13:11:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.496 13:11:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:50.496 13:11:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.496 13:11:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:50.496 Found net devices under 0000:31:00.0: cvl_0_0 00:28:50.496 13:11:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.496 13:11:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.496 13:11:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.496 13:11:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:50.496 13:11:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.496 13:11:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:50.496 Found net devices under 0000:31:00.1: cvl_0_1 00:28:50.496 13:11:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.496 13:11:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:50.496 13:11:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:50.496 13:11:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:50.496 13:11:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.496 13:11:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.496 13:11:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.496 13:11:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:50.496 13:11:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.496 13:11:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.496 13:11:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:50.496 13:11:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.496 13:11:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.496 13:11:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:50.496 13:11:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:50.496 13:11:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.496 13:11:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.496 13:11:54 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:50.496 13:11:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.496 13:11:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:50.496 13:11:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.496 13:11:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.496 13:11:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.496 13:11:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:50.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:28:50.496 00:28:50.496 --- 10.0.0.2 ping statistics --- 00:28:50.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.496 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:28:50.496 13:11:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:28:50.496 00:28:50.496 --- 10.0.0.1 ping statistics --- 00:28:50.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.496 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:28:50.496 13:11:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.496 13:11:54 -- nvmf/common.sh@411 -- # return 0 00:28:50.496 13:11:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:50.496 13:11:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.496 13:11:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:50.496 13:11:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.496 13:11:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:50.496 13:11:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:50.496 13:11:54 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:50.496 13:11:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:50.496 13:11:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:50.496 13:11:54 -- common/autotest_common.sh@10 -- # set +x 00:28:50.496 13:11:54 -- nvmf/common.sh@470 -- # nvmfpid=4145643 00:28:50.496 13:11:54 -- nvmf/common.sh@471 -- # waitforlisten 4145643 00:28:50.496 13:11:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:50.496 13:11:54 -- common/autotest_common.sh@817 -- # '[' -z 4145643 ']' 00:28:50.496 13:11:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.496 13:11:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:50.496 13:11:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.496 13:11:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:50.496 13:11:54 -- common/autotest_common.sh@10 -- # set +x 00:28:50.496 [2024-04-26 13:11:54.508267] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
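In summary, the nvmftestinit trace above is a small network-namespace recipe: the first E810 port (cvl_0_0 on this host) is moved into a dedicated namespace and becomes the target interface, while the second port (cvl_0_1) stays in the default namespace as the initiator side. The following is condensed from the commands in the trace; the interface names are whatever this CI host assigns, not values to hard-code.

# Move the target port into its own namespace and address both ends of the link
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # firewall rule the test adds for NVMe/TCP
ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check

The nvmf_tgt application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is the startup whose DPDK/EAL output follows.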
00:28:50.496 [2024-04-26 13:11:54.508338] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.496 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.496 [2024-04-26 13:11:54.597348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:50.496 [2024-04-26 13:11:54.689420] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.496 [2024-04-26 13:11:54.689480] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.496 [2024-04-26 13:11:54.689488] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.496 [2024-04-26 13:11:54.689495] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.496 [2024-04-26 13:11:54.689502] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.496 [2024-04-26 13:11:54.689662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.496 [2024-04-26 13:11:54.689833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.496 [2024-04-26 13:11:54.689833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.496 13:11:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:50.496 13:11:55 -- common/autotest_common.sh@850 -- # return 0 00:28:50.496 13:11:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:50.496 13:11:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:50.496 13:11:55 -- common/autotest_common.sh@10 -- # set +x 00:28:50.496 13:11:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.496 13:11:55 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:50.496 [2024-04-26 13:11:55.444220] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.496 13:11:55 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:50.757 Malloc0 00:28:50.757 13:11:55 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.018 13:11:55 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.018 13:11:55 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.279 [2024-04-26 13:11:56.114276] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.279 13:11:56 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:51.279 [2024-04-26 13:11:56.274719] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:51.279 13:11:56 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4422 00:28:51.539 [2024-04-26 13:11:56.435223] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:51.539 13:11:56 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:51.539 13:11:56 -- host/failover.sh@31 -- # bdevperf_pid=4146045 00:28:51.539 13:11:56 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:51.539 13:11:56 -- host/failover.sh@34 -- # waitforlisten 4146045 /var/tmp/bdevperf.sock 00:28:51.539 13:11:56 -- common/autotest_common.sh@817 -- # '[' -z 4146045 ']' 00:28:51.539 13:11:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:51.539 13:11:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:51.539 13:11:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:51.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:51.539 13:11:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:51.539 13:11:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.515 13:11:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:52.515 13:11:57 -- common/autotest_common.sh@850 -- # return 0 00:28:52.515 13:11:57 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:52.515 NVMe0n1 00:28:52.515 13:11:57 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:52.776 00:28:52.776 13:11:57 -- host/failover.sh@39 -- # run_test_pid=4146340 00:28:52.776 13:11:57 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:52.776 13:11:57 -- host/failover.sh@41 -- # sleep 1 00:28:54.162 13:11:58 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.162 [2024-04-26 13:11:58.949687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e210 is same with the state(5) to be set 00:28:54.162 [2024-04-26 13:11:58.949728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e210 is same with the state(5) to be set 00:28:54.162 [2024-04-26 13:11:58.949734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e210 is same with the state(5) to be set 00:28:54.162 [2024-04-26 13:11:58.949739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e210 is same with the state(5) to be set 00:28:54.162 [2024-04-26 13:11:58.949749] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e210 is same with the state(5) to be set 00:28:54.162 [2024-04-26 13:11:58.949754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e210 is same with the state(5) to be set 00:28:54.162 [2024-04-26 13:11:58.949758] 
00:28:54.163 13:11:58 -- host/failover.sh@45 -- # sleep 3 00:28:57.463 13:12:01 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:57.464 00:28:57.464 13:12:02 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:57.724 [2024-04-26 13:12:02.525297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f0c0 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state error for tqpair=0x113f0c0 is repeated many more times with only the timestamp changing; duplicate entries omitted ...]
00:28:57.725 13:12:02 -- host/failover.sh@50 -- # sleep 3 00:29:01.027 13:12:05 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.027 [2024-04-26 13:12:05.698898] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.027 13:12:05 -- host/failover.sh@55 -- # sleep 1 00:29:01.969 13:12:06 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:01.969 [2024-04-26 13:12:06.875105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113fda0 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state error for tqpair=0x113fda0 is repeated many more times with only the timestamp changing; duplicate entries omitted ...]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113fda0 is same with the state(5) to be set 00:29:01.970 [2024-04-26 13:12:06.875297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113fda0 is same with the state(5) to be set 00:29:01.970 13:12:06 -- host/failover.sh@59 -- # wait 4146340 00:29:08.614 0 00:29:08.614 13:12:12 -- host/failover.sh@61 -- # killprocess 4146045 00:29:08.614 13:12:12 -- common/autotest_common.sh@936 -- # '[' -z 4146045 ']' 00:29:08.614 13:12:12 -- common/autotest_common.sh@940 -- # kill -0 4146045 00:29:08.614 13:12:12 -- common/autotest_common.sh@941 -- # uname 00:29:08.614 13:12:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.614 13:12:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4146045 00:29:08.614 13:12:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:08.614 13:12:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:08.614 13:12:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4146045' 00:29:08.614 killing process with pid 4146045 00:29:08.614 13:12:13 -- common/autotest_common.sh@955 -- # kill 4146045 00:29:08.614 13:12:13 -- common/autotest_common.sh@960 -- # wait 4146045 00:29:08.614 13:12:13 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:08.614 [2024-04-26 13:11:56.499848] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:29:08.614 [2024-04-26 13:11:56.499903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146045 ] 00:29:08.614 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.614 [2024-04-26 13:11:56.559489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.614 [2024-04-26 13:11:56.621644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.614 Running I/O for 15 seconds... 
00:29:08.614 [2024-04-26 13:11:58.950269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 13:11:58.950304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 13:11:58.950322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 13:11:58.950331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further READ command / ABORTED - SQ DELETION completion pairs for the remaining outstanding I/Os (lba 93240 through 94024) repeat the same pattern and are omitted ...]
00:29:08.617 [2024-04-26 13:11:58.951971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.951978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.951987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.951994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952132] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.617 [2024-04-26 13:11:58.952382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b32640 is same with the state(5) to be set 00:29:08.617 [2024-04-26 13:11:58.952398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:08.617 [2024-04-26 13:11:58.952404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:08.617 [2024-04-26 13:11:58.952413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94240 len:8 PRP1 0x0 PRP2 0x0 00:29:08.617 [2024-04-26 13:11:58.952420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.617 [2024-04-26 13:11:58.952457] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b32640 was disconnected and freed. reset controller. 
00:29:08.617 [2024-04-26 13:11:58.952466] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:08.617 [2024-04-26 13:11:58.952484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.618 [2024-04-26 13:11:58.952492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.618 [2024-04-26 13:11:58.952500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.618 [2024-04-26 13:11:58.952507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.618 [2024-04-26 13:11:58.952515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.618 [2024-04-26 13:11:58.952522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.618 [2024-04-26 13:11:58.952530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.618 [2024-04-26 13:11:58.952536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.618 [2024-04-26 13:11:58.952543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.618 [2024-04-26 13:11:58.956132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.618 [2024-04-26 13:11:58.956156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b13af0 (9): Bad file descriptor 00:29:08.618 [2024-04-26 13:11:59.032801] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
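At this point every queued READ on the I/O qpair has completed with ABORTED - SQ DELETION, the admin queue's ASYNC EVENT REQUESTs are aborted, and bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before resetting the controller. A minimal sketch for summarizing this phase from a saved copy of the console output follows; the script name, log file name, and the message patterns searched for are assumptions taken from the lines above, not part of the test itself (GNU grep/awk assumed).

#!/usr/bin/env bash
# summarize-aborts.sh (hedged sketch, not part of the autotest):
# summarize aborted I/O and failover activity from a saved console log.
# "nvmf-failover.log" is a placeholder file name.
LOG="${1:-nvmf-failover.log}"

echo "Aborted READ/WRITE commands:"
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: \(READ\|WRITE\)' "$LOG" | awk '{print $3}' | sort | uniq -c

echo "Failover events:"
grep -o 'bdev_nvme_failover_trid: \*NOTICE\*: Start failover from [^ ]* to [^ ]*' "$LOG"

echo -n "Successful controller resets: "
grep -c 'Resetting controller successful' "$LOG"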
00:29:08.618 [2024-04-26 13:12:02.526812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:08.618 [2024-04-26 13:12:02.526853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: queued READs (lba 33416 through 33992) and queued WRITEs (lba 34000 through 34416) on qid:1 all complete with ABORTED - SQ DELETION (00/08) while the second TCP qpair is torn down ...]
00:29:08.621 [2024-04-26 13:12:02.528950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:08.621 [2024-04-26 13:12:02.528959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:08.621 [2024-04-26 13:12:02.528965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34424 len:8 PRP1 0x0 PRP2 0x0
00:29:08.621 [2024-04-26 13:12:02.528973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.621 [2024-04-26 13:12:02.529010] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cdcf80 was disconnected and freed. reset controller.
00:29:08.621 [2024-04-26 13:12:02.529019] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:08.621 [2024-04-26 13:12:02.529038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.621 [2024-04-26 13:12:02.529046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.621 [2024-04-26 13:12:02.529054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.621 [2024-04-26 13:12:02.529061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.621 [2024-04-26 13:12:02.529068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.621 [2024-04-26 13:12:02.529075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.621 [2024-04-26 13:12:02.529083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.621 [2024-04-26 13:12:02.529090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.621 [2024-04-26 13:12:02.529097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:08.621 [2024-04-26 13:12:02.529119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b13af0 (9): Bad file descriptor
00:29:08.621 [2024-04-26 13:12:02.532663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:08.621 [2024-04-26 13:12:02.611289] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:08.621 [2024-04-26 13:12:06.876864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.876901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.876919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.876927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.876937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.876944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.876954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.876962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.876971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.876978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.876991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.876999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.877008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.877015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.877025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.877032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.877041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.877048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.877057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.621 [2024-04-26 13:12:06.877064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.621 [2024-04-26 13:12:06.877073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48464 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.622 [2024-04-26 13:12:06.877404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.622 [2024-04-26 13:12:06.877420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.622 [2024-04-26 13:12:06.877437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.622 [2024-04-26 13:12:06.877453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.622 [2024-04-26 13:12:06.877469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.622 [2024-04-26 13:12:06.877485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.622 [2024-04-26 13:12:06.877501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:08.622 [2024-04-26 13:12:06.877566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.622 [2024-04-26 13:12:06.877715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.622 [2024-04-26 13:12:06.877725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877732] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877897] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.877986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.877993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 
[2024-04-26 13:12:06.878229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.623 [2024-04-26 13:12:06.878277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.623 [2024-04-26 13:12:06.878284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.624 [2024-04-26 13:12:06.878703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:08.624 [2024-04-26 13:12:06.878886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.624 [2024-04-26 13:12:06.878927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.624 [2024-04-26 13:12:06.878934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.625 [2024-04-26 13:12:06.878943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.625 [2024-04-26 13:12:06.878950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.625 [2024-04-26 13:12:06.878959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.625 [2024-04-26 13:12:06.878967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.625 [2024-04-26 13:12:06.878987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:08.625 [2024-04-26 13:12:06.878993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:08.625 [2024-04-26 13:12:06.879000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49480 len:8 PRP1 0x0 PRP2 0x0 00:29:08.625 [2024-04-26 13:12:06.879008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.625 [2024-04-26 13:12:06.879045] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b1ffe0 was disconnected and freed. reset controller. 
00:29:08.625 [2024-04-26 13:12:06.879054] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:08.625 [2024-04-26 13:12:06.879072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.625 [2024-04-26 13:12:06.879081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.625 [2024-04-26 13:12:06.879089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.625 [2024-04-26 13:12:06.879096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.625 [2024-04-26 13:12:06.879107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.625 [2024-04-26 13:12:06.879114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.625 [2024-04-26 13:12:06.879122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:08.625 [2024-04-26 13:12:06.879129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.625 [2024-04-26 13:12:06.879137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:08.625 [2024-04-26 13:12:06.879159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b13af0 (9): Bad file descriptor
00:29:08.625 [2024-04-26 13:12:06.882677] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:08.625 [2024-04-26 13:12:06.922170] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:08.625
00:29:08.625 Latency(us)
00:29:08.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:08.625 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:08.625 Verification LBA range: start 0x0 length 0x4000
00:29:08.625 NVMe0n1 : 15.01 11066.18 43.23 453.35 0.00 11084.05 532.48 14964.05
00:29:08.625 ===================================================================================================================
00:29:08.625 Total : 11066.18 43.23 453.35 0.00 11084.05 532.48 14964.05
00:29:08.625 Received shutdown signal, test time was about 15.000000 seconds
00:29:08.625
00:29:08.625 Latency(us)
00:29:08.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:08.625 ===================================================================================================================
00:29:08.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:08.625 13:12:13 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:08.625 13:12:13 -- host/failover.sh@65 -- # count=3
00:29:08.625 13:12:13 -- host/failover.sh@67 -- # (( count != 3 ))
00:29:08.625 13:12:13 -- host/failover.sh@73 -- # bdevperf_pid=4149916
00:29:08.625 13:12:13 -- host/failover.sh@75 -- # waitforlisten 4149916 /var/tmp/bdevperf.sock
00:29:08.625 13:12:13 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:08.625 13:12:13 -- common/autotest_common.sh@817 -- # '[' -z 4149916 ']'
00:29:08.625 13:12:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:08.625 13:12:13 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:08.625 13:12:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:08.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:08.625 13:12:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:08.625 13:12:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.197 13:12:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:09.198 13:12:13 -- common/autotest_common.sh@850 -- # return 0 00:29:09.198 13:12:13 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:09.198 [2024-04-26 13:12:14.111895] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:09.198 13:12:14 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:09.459 [2024-04-26 13:12:14.272280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:09.459 13:12:14 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.720 NVMe0n1 00:29:09.720 13:12:14 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.981 00:29:09.981 13:12:15 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:10.242 00:29:10.503 13:12:15 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:10.503 13:12:15 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:10.503 13:12:15 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:10.764 13:12:15 -- host/failover.sh@87 -- # sleep 3 00:29:14.067 13:12:18 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:14.067 13:12:18 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:14.067 13:12:18 -- host/failover.sh@90 -- # run_test_pid=4150938 00:29:14.067 13:12:18 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:14.067 13:12:18 -- host/failover.sh@92 -- # wait 4150938 00:29:15.006 0 00:29:15.006 13:12:19 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:15.006 [2024-04-26 13:12:13.193919] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:29:15.006 [2024-04-26 13:12:13.193975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149916 ] 00:29:15.006 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.006 [2024-04-26 13:12:13.253828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.006 [2024-04-26 13:12:13.314661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.006 [2024-04-26 13:12:15.618418] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:15.006 [2024-04-26 13:12:15.618461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.006 [2024-04-26 13:12:15.618472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-04-26 13:12:15.618481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.006 [2024-04-26 13:12:15.618488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-04-26 13:12:15.618496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.006 [2024-04-26 13:12:15.618503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-04-26 13:12:15.618511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.006 [2024-04-26 13:12:15.618518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.006 [2024-04-26 13:12:15.618525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.006 [2024-04-26 13:12:15.618555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.006 [2024-04-26 13:12:15.618569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7baf0 (9): Bad file descriptor 00:29:15.006 [2024-04-26 13:12:15.627004] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:15.006 Running I/O for 1 seconds... 
00:29:15.006 00:29:15.006 Latency(us) 00:29:15.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.006 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:15.006 Verification LBA range: start 0x0 length 0x4000 00:29:15.006 NVMe0n1 : 1.01 11288.68 44.10 0.00 0.00 11272.19 2457.60 9994.24 00:29:15.007 =================================================================================================================== 00:29:15.007 Total : 11288.68 44.10 0.00 0.00 11272.19 2457.60 9994.24 00:29:15.007 13:12:19 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:15.007 13:12:19 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:15.268 13:12:20 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:15.268 13:12:20 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:15.268 13:12:20 -- host/failover.sh@99 -- # grep -q NVMe0 00:29:15.528 13:12:20 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:15.789 13:12:20 -- host/failover.sh@101 -- # sleep 3 00:29:19.088 13:12:23 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.088 13:12:23 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:19.088 13:12:23 -- host/failover.sh@108 -- # killprocess 4149916 00:29:19.088 13:12:23 -- common/autotest_common.sh@936 -- # '[' -z 4149916 ']' 00:29:19.088 13:12:23 -- common/autotest_common.sh@940 -- # kill -0 4149916 00:29:19.088 13:12:23 -- common/autotest_common.sh@941 -- # uname 00:29:19.088 13:12:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:19.088 13:12:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4149916 00:29:19.088 13:12:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:19.088 13:12:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:19.088 13:12:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4149916' 00:29:19.088 killing process with pid 4149916 00:29:19.088 13:12:23 -- common/autotest_common.sh@955 -- # kill 4149916 00:29:19.088 13:12:23 -- common/autotest_common.sh@960 -- # wait 4149916 00:29:19.088 13:12:23 -- host/failover.sh@110 -- # sync 00:29:19.088 13:12:23 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.088 13:12:24 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:19.088 13:12:24 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:19.349 13:12:24 -- host/failover.sh@116 -- # nvmftestfini 00:29:19.349 13:12:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:19.349 13:12:24 -- nvmf/common.sh@117 -- # sync 00:29:19.349 13:12:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:19.349 13:12:24 -- nvmf/common.sh@120 -- # set +e 00:29:19.349 13:12:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:19.349 13:12:24 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:29:19.349 rmmod nvme_tcp 00:29:19.349 rmmod nvme_fabrics 00:29:19.349 rmmod nvme_keyring 00:29:19.349 13:12:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:19.349 13:12:24 -- nvmf/common.sh@124 -- # set -e 00:29:19.349 13:12:24 -- nvmf/common.sh@125 -- # return 0 00:29:19.349 13:12:24 -- nvmf/common.sh@478 -- # '[' -n 4145643 ']' 00:29:19.349 13:12:24 -- nvmf/common.sh@479 -- # killprocess 4145643 00:29:19.349 13:12:24 -- common/autotest_common.sh@936 -- # '[' -z 4145643 ']' 00:29:19.349 13:12:24 -- common/autotest_common.sh@940 -- # kill -0 4145643 00:29:19.349 13:12:24 -- common/autotest_common.sh@941 -- # uname 00:29:19.349 13:12:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:19.349 13:12:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4145643 00:29:19.349 13:12:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:19.349 13:12:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:19.349 13:12:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4145643' 00:29:19.349 killing process with pid 4145643 00:29:19.349 13:12:24 -- common/autotest_common.sh@955 -- # kill 4145643 00:29:19.349 13:12:24 -- common/autotest_common.sh@960 -- # wait 4145643 00:29:19.349 13:12:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:19.349 13:12:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:19.349 13:12:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:19.349 13:12:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:19.349 13:12:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:19.349 13:12:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.349 13:12:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:19.349 13:12:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.892 13:12:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:21.892 00:29:21.892 real 0m39.454s 00:29:21.892 user 2m1.678s 00:29:21.892 sys 0m8.032s 00:29:21.892 13:12:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:21.892 13:12:26 -- common/autotest_common.sh@10 -- # set +x 00:29:21.892 ************************************ 00:29:21.892 END TEST nvmf_failover 00:29:21.892 ************************************ 00:29:21.892 13:12:26 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:21.892 13:12:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:21.892 13:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:21.892 13:12:26 -- common/autotest_common.sh@10 -- # set +x 00:29:21.892 ************************************ 00:29:21.892 START TEST nvmf_discovery 00:29:21.892 ************************************ 00:29:21.892 13:12:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:21.892 * Looking for test storage... 
00:29:21.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:21.892 13:12:26 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.892 13:12:26 -- nvmf/common.sh@7 -- # uname -s 00:29:21.892 13:12:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.892 13:12:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.892 13:12:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.892 13:12:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.892 13:12:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.892 13:12:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.892 13:12:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.892 13:12:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.892 13:12:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.892 13:12:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.892 13:12:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:21.892 13:12:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:21.892 13:12:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.892 13:12:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.892 13:12:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.892 13:12:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.892 13:12:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.892 13:12:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.892 13:12:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.892 13:12:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.892 13:12:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.892 13:12:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.892 13:12:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.892 13:12:26 -- paths/export.sh@5 -- # export PATH 00:29:21.892 13:12:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.892 13:12:26 -- nvmf/common.sh@47 -- # : 0 00:29:21.892 13:12:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.892 13:12:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.892 13:12:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.892 13:12:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.892 13:12:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.892 13:12:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.892 13:12:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.892 13:12:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.892 13:12:26 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:21.892 13:12:26 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:21.892 13:12:26 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:21.892 13:12:26 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:21.892 13:12:26 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:21.892 13:12:26 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:21.892 13:12:26 -- host/discovery.sh@25 -- # nvmftestinit 00:29:21.892 13:12:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:21.892 13:12:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.892 13:12:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:21.892 13:12:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:21.892 13:12:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:21.892 13:12:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.892 13:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:21.892 13:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.893 13:12:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:21.893 13:12:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:21.893 13:12:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:21.893 13:12:26 -- common/autotest_common.sh@10 -- # set +x 00:29:30.028 13:12:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:30.028 13:12:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:30.028 13:12:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:30.028 13:12:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:30.028 13:12:33 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:30.028 13:12:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:30.028 13:12:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:30.028 13:12:33 -- nvmf/common.sh@295 -- # net_devs=() 00:29:30.028 13:12:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:30.028 13:12:33 -- nvmf/common.sh@296 -- # e810=() 00:29:30.029 13:12:33 -- nvmf/common.sh@296 -- # local -ga e810 00:29:30.029 13:12:33 -- nvmf/common.sh@297 -- # x722=() 00:29:30.029 13:12:33 -- nvmf/common.sh@297 -- # local -ga x722 00:29:30.029 13:12:33 -- nvmf/common.sh@298 -- # mlx=() 00:29:30.029 13:12:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:30.029 13:12:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.029 13:12:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:30.029 13:12:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:30.029 13:12:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:30.029 13:12:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.029 13:12:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:30.029 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:30.029 13:12:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.029 13:12:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:30.029 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:30.029 13:12:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:30.029 13:12:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.029 
13:12:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.029 13:12:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:30.029 13:12:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.029 13:12:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:30.029 Found net devices under 0000:31:00.0: cvl_0_0 00:29:30.029 13:12:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.029 13:12:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.029 13:12:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.029 13:12:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:30.029 13:12:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.029 13:12:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:30.029 Found net devices under 0000:31:00.1: cvl_0_1 00:29:30.029 13:12:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.029 13:12:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:30.029 13:12:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:30.029 13:12:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:30.029 13:12:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:30.029 13:12:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.029 13:12:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.029 13:12:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.029 13:12:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:30.029 13:12:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.029 13:12:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.029 13:12:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:30.029 13:12:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.029 13:12:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.029 13:12:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:30.029 13:12:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:30.029 13:12:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.029 13:12:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.029 13:12:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.029 13:12:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.029 13:12:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:30.029 13:12:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.029 13:12:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.029 13:12:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.029 13:12:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:30.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:29:30.029 00:29:30.029 --- 10.0.0.2 ping statistics --- 00:29:30.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.029 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:29:30.029 13:12:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:29:30.029 00:29:30.029 --- 10.0.0.1 ping statistics --- 00:29:30.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.029 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:29:30.029 13:12:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.029 13:12:34 -- nvmf/common.sh@411 -- # return 0 00:29:30.029 13:12:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:30.029 13:12:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.029 13:12:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:30.029 13:12:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:30.029 13:12:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.029 13:12:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:30.029 13:12:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:30.029 13:12:34 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:30.029 13:12:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:30.029 13:12:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:30.029 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:29:30.029 13:12:34 -- nvmf/common.sh@470 -- # nvmfpid=4156093 00:29:30.029 13:12:34 -- nvmf/common.sh@471 -- # waitforlisten 4156093 00:29:30.029 13:12:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:30.029 13:12:34 -- common/autotest_common.sh@817 -- # '[' -z 4156093 ']' 00:29:30.029 13:12:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.029 13:12:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:30.029 13:12:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.029 13:12:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:30.029 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:29:30.029 [2024-04-26 13:12:34.193705] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:29:30.029 [2024-04-26 13:12:34.193766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.029 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.029 [2024-04-26 13:12:34.283336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.029 [2024-04-26 13:12:34.375399] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.029 [2024-04-26 13:12:34.375455] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.029 [2024-04-26 13:12:34.375463] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.029 [2024-04-26 13:12:34.375470] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.029 [2024-04-26 13:12:34.375476] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:30.029 [2024-04-26 13:12:34.375514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.029 13:12:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:30.029 13:12:34 -- common/autotest_common.sh@850 -- # return 0 00:29:30.029 13:12:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:30.029 13:12:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:30.029 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:29:30.029 13:12:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.029 13:12:35 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.029 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.029 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.029 [2024-04-26 13:12:35.027558] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.029 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.029 13:12:35 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:30.029 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.029 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.029 [2024-04-26 13:12:35.039811] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:30.029 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.029 13:12:35 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:30.029 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.029 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.029 null0 00:29:30.029 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.030 13:12:35 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:30.030 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.030 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.030 null1 00:29:30.030 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.030 13:12:35 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:30.030 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.030 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.030 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.030 13:12:35 -- host/discovery.sh@45 -- # hostpid=4156371 00:29:30.030 13:12:35 -- host/discovery.sh@46 -- # waitforlisten 4156371 /tmp/host.sock 00:29:30.030 13:12:35 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:30.030 13:12:35 -- common/autotest_common.sh@817 -- # '[' -z 4156371 ']' 00:29:30.030 13:12:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:30.030 13:12:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:30.030 13:12:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:30.030 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:30.030 13:12:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:30.030 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.290 [2024-04-26 13:12:35.132400] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:29:30.290 [2024-04-26 13:12:35.132461] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156371 ] 00:29:30.290 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.290 [2024-04-26 13:12:35.197181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.290 [2024-04-26 13:12:35.268484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.860 13:12:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:30.860 13:12:35 -- common/autotest_common.sh@850 -- # return 0 00:29:30.860 13:12:35 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:30.860 13:12:35 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:30.860 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.860 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.860 13:12:35 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:30.860 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.860 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.121 13:12:35 -- host/discovery.sh@72 -- # notify_id=0 00:29:31.121 13:12:35 -- host/discovery.sh@83 -- # get_subsystem_names 00:29:31.121 13:12:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:31.121 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.121 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:31.121 13:12:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:31.121 13:12:35 -- host/discovery.sh@59 -- # sort 00:29:31.121 13:12:35 -- host/discovery.sh@59 -- # xargs 00:29:31.121 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.121 13:12:35 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:31.121 13:12:35 -- host/discovery.sh@84 -- # get_bdev_list 00:29:31.121 13:12:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.121 13:12:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:31.121 13:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.121 13:12:35 -- host/discovery.sh@55 -- # sort 00:29:31.121 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:29:31.121 13:12:35 -- host/discovery.sh@55 -- # xargs 00:29:31.121 13:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.121 13:12:36 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:31.121 13:12:36 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:31.121 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.121 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.121 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.121 13:12:36 -- host/discovery.sh@87 -- # get_subsystem_names 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:31.121 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # sort 
00:29:31.121 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # xargs 00:29:31.121 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.121 13:12:36 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:31.121 13:12:36 -- host/discovery.sh@88 -- # get_bdev_list 00:29:31.121 13:12:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.121 13:12:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:31.121 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.121 13:12:36 -- host/discovery.sh@55 -- # sort 00:29:31.121 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.121 13:12:36 -- host/discovery.sh@55 -- # xargs 00:29:31.121 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.121 13:12:36 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:31.121 13:12:36 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:31.121 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.121 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.121 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.121 13:12:36 -- host/discovery.sh@91 -- # get_subsystem_names 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # sort 00:29:31.121 13:12:36 -- host/discovery.sh@59 -- # xargs 00:29:31.121 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.121 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.121 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.383 13:12:36 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:31.383 13:12:36 -- host/discovery.sh@92 -- # get_bdev_list 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.383 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.383 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # sort 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # xargs 00:29:31.383 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.383 13:12:36 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:31.383 13:12:36 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:31.383 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.383 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.383 [2024-04-26 13:12:36.270908] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.383 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.383 13:12:36 -- host/discovery.sh@97 -- # get_subsystem_names 00:29:31.383 13:12:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:31.383 13:12:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:31.383 13:12:36 -- host/discovery.sh@59 -- # sort 00:29:31.383 13:12:36 -- host/discovery.sh@59 -- # xargs 00:29:31.383 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.383 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.383 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.383 13:12:36 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:31.383 13:12:36 -- host/discovery.sh@98 -- # get_bdev_list 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.383 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.383 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # sort 00:29:31.383 13:12:36 -- host/discovery.sh@55 -- # xargs 00:29:31.383 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.383 13:12:36 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:31.383 13:12:36 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:31.383 13:12:36 -- host/discovery.sh@79 -- # expected_count=0 00:29:31.383 13:12:36 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:31.383 13:12:36 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:31.383 13:12:36 -- common/autotest_common.sh@901 -- # local max=10 00:29:31.383 13:12:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:31.383 13:12:36 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:31.383 13:12:36 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:31.383 13:12:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:31.383 13:12:36 -- host/discovery.sh@74 -- # jq '. | length' 00:29:31.383 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.383 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.383 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.383 13:12:36 -- host/discovery.sh@74 -- # notification_count=0 00:29:31.383 13:12:36 -- host/discovery.sh@75 -- # notify_id=0 00:29:31.383 13:12:36 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:31.383 13:12:36 -- common/autotest_common.sh@904 -- # return 0 00:29:31.383 13:12:36 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:31.383 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.383 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.383 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.383 13:12:36 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:31.383 13:12:36 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:31.383 13:12:36 -- common/autotest_common.sh@901 -- # local max=10 00:29:31.383 13:12:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:31.383 13:12:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:31.383 13:12:36 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:31.383 13:12:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:31.383 13:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.383 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:31.383 13:12:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:31.383 13:12:36 -- host/discovery.sh@59 -- # sort 00:29:31.643 13:12:36 -- host/discovery.sh@59 -- # xargs 00:29:31.643 13:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:29:31.643 13:12:36 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:29:31.643 13:12:36 -- common/autotest_common.sh@906 -- # sleep 1 00:29:32.214 [2024-04-26 13:12:36.966913] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:32.214 [2024-04-26 13:12:36.966934] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:32.214 [2024-04-26 13:12:36.966952] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:32.214 [2024-04-26 13:12:37.055226] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:32.214 [2024-04-26 13:12:37.157533] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:32.214 [2024-04-26 13:12:37.157553] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:32.475 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.475 13:12:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:32.475 13:12:37 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:32.475 13:12:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:32.475 13:12:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:32.475 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.475 13:12:37 -- host/discovery.sh@59 -- # sort 00:29:32.475 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.475 13:12:37 -- host/discovery.sh@59 -- # xargs 00:29:32.475 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.736 13:12:37 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.736 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # sort 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # xargs 00:29:32.736 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.736 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.736 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.736 13:12:37 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.736 13:12:37 -- 
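The discovery attach logged just above is the outcome of the RPCs issued earlier in this trace. A condensed sketch of that sequence, in the order the script runs it; rpc.py here is shorthand (an assumption) for the scripts/rpc.py invocation that the log's rpc_cmd helper wraps, with the target using the default socket and the host using /tmp/host.sock:
# target side: TCP transport plus the discovery service on port 8009, and a null bdev to export later
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512
# host side: the discovery poller starts first and attaches whatever the discovery log page advertises later
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# target side: once cnode0 exists with a namespace, a 4420 listener and the allowed host NQN,
# the poller attaches it as controller nvme0 / bdev nvme0n1 (the "attach nvme0 done" entries above)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test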
common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:32.736 13:12:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:32.736 13:12:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:32.736 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.736 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.736 13:12:37 -- host/discovery.sh@63 -- # sort -n 00:29:32.736 13:12:37 -- host/discovery.sh@63 -- # xargs 00:29:32.736 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.736 13:12:37 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:32.736 13:12:37 -- host/discovery.sh@79 -- # expected_count=1 00:29:32.736 13:12:37 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:32.736 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:32.736 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.736 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:32.736 13:12:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:32.736 13:12:37 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:32.736 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.736 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.736 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.736 13:12:37 -- host/discovery.sh@74 -- # notification_count=1 00:29:32.736 13:12:37 -- host/discovery.sh@75 -- # notify_id=1 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:32.736 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.736 13:12:37 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:32.736 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.736 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.736 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.736 13:12:37 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.736 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.736 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.736 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # sort 00:29:32.736 13:12:37 -- host/discovery.sh@55 -- # xargs 00:29:32.736 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:32.736 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.736 13:12:37 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:32.736 13:12:37 -- host/discovery.sh@79 -- # expected_count=1 00:29:32.736 13:12:37 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:32.736 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:32.736 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.736 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:32.736 13:12:37 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:32.736 13:12:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:32.736 13:12:37 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:32.736 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.736 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.736 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.997 13:12:37 -- host/discovery.sh@74 -- # notification_count=1 00:29:32.997 13:12:37 -- host/discovery.sh@75 -- # notify_id=2 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:32.997 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.997 13:12:37 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:32.997 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.997 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.997 [2024-04-26 13:12:37.830989] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:32.997 [2024-04-26 13:12:37.831157] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:32.997 [2024-04-26 13:12:37.831182] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:32.997 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.997 13:12:37 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.997 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:32.997 13:12:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:32.997 13:12:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:32.997 13:12:37 -- host/discovery.sh@59 -- # sort 00:29:32.997 13:12:37 -- host/discovery.sh@59 -- # xargs 00:29:32.997 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.997 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.997 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.997 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.997 13:12:37 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.997 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:32.997 13:12:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.997 13:12:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:32.997 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.997 13:12:37 -- host/discovery.sh@55 -- # sort 00:29:32.997 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.997 13:12:37 -- host/discovery.sh@55 -- # xargs 00:29:32.997 [2024-04-26 13:12:37.919530] 
bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:32.997 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:32.997 13:12:37 -- common/autotest_common.sh@904 -- # return 0 00:29:32.997 13:12:37 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@901 -- # local max=10 00:29:32.997 13:12:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:32.997 13:12:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:32.997 13:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.997 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:29:32.997 13:12:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:32.997 13:12:37 -- host/discovery.sh@63 -- # sort -n 00:29:32.997 13:12:37 -- host/discovery.sh@63 -- # xargs 00:29:32.997 13:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.997 13:12:37 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:32.997 13:12:38 -- common/autotest_common.sh@906 -- # sleep 1 00:29:32.997 [2024-04-26 13:12:38.020433] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:32.997 [2024-04-26 13:12:38.020455] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:32.997 [2024-04-26 13:12:38.020461] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:34.384 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:34.384 13:12:39 -- host/discovery.sh@63 -- # xargs 00:29:34.384 13:12:39 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:34.384 13:12:39 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:34.384 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.384 13:12:39 -- host/discovery.sh@63 -- # sort -n 00:29:34.384 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.384 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:34.384 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.384 13:12:39 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:34.384 13:12:39 -- host/discovery.sh@79 -- # expected_count=0 00:29:34.384 13:12:39 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:34.384 
13:12:39 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:34.384 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.384 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:34.384 13:12:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:34.384 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.384 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.384 13:12:39 -- host/discovery.sh@74 -- # jq '. | length' 00:29:34.384 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.384 13:12:39 -- host/discovery.sh@74 -- # notification_count=0 00:29:34.384 13:12:39 -- host/discovery.sh@75 -- # notify_id=2 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:34.384 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.384 13:12:39 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:34.384 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.384 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.384 [2024-04-26 13:12:39.110551] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:34.384 [2024-04-26 13:12:39.110573] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:34.384 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.384 13:12:39 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:34.384 13:12:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:34.384 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.384 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:34.384 [2024-04-26 13:12:39.119807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.384 [2024-04-26 13:12:39.119824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.384 [2024-04-26 13:12:39.119834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.384 [2024-04-26 13:12:39.119848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.384 [2024-04-26 13:12:39.119856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.384 13:12:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.384 [2024-04-26 13:12:39.119863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.384 [2024-04-26 
13:12:39.119876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.384 [2024-04-26 13:12:39.119884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.384 [2024-04-26 13:12:39.119891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b010 is same with the state(5) to be set 00:29:34.384 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.384 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.384 13:12:39 -- host/discovery.sh@59 -- # sort 00:29:34.384 13:12:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.384 13:12:39 -- host/discovery.sh@59 -- # xargs 00:29:34.384 [2024-04-26 13:12:39.129820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b010 (9): Bad file descriptor 00:29:34.384 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.384 [2024-04-26 13:12:39.139860] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:34.384 [2024-04-26 13:12:39.140345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.140681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.140694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3b010 with addr=10.0.0.2, port=4420 00:29:34.384 [2024-04-26 13:12:39.140703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b010 is same with the state(5) to be set 00:29:34.384 [2024-04-26 13:12:39.140721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b010 (9): Bad file descriptor 00:29:34.384 [2024-04-26 13:12:39.140748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:34.384 [2024-04-26 13:12:39.140756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:34.384 [2024-04-26 13:12:39.140765] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:34.384 [2024-04-26 13:12:39.140780] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.384 [2024-04-26 13:12:39.149914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:34.384 [2024-04-26 13:12:39.150249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.150575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.150584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3b010 with addr=10.0.0.2, port=4420 00:29:34.384 [2024-04-26 13:12:39.150591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b010 is same with the state(5) to be set 00:29:34.384 [2024-04-26 13:12:39.150602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b010 (9): Bad file descriptor 00:29:34.384 [2024-04-26 13:12:39.150612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:34.384 [2024-04-26 13:12:39.150618] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:34.384 [2024-04-26 13:12:39.150625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:34.384 [2024-04-26 13:12:39.150635] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.384 [2024-04-26 13:12:39.159968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:34.384 [2024-04-26 13:12:39.160281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.160629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.160638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3b010 with addr=10.0.0.2, port=4420 00:29:34.384 [2024-04-26 13:12:39.160649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b010 is same with the state(5) to be set 00:29:34.384 [2024-04-26 13:12:39.160661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b010 (9): Bad file descriptor 00:29:34.384 [2024-04-26 13:12:39.160671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:34.384 [2024-04-26 13:12:39.160677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:34.384 [2024-04-26 13:12:39.160684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:34.384 [2024-04-26 13:12:39.160694] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.384 [2024-04-26 13:12:39.170021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:34.384 [2024-04-26 13:12:39.170341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.170654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.384 [2024-04-26 13:12:39.170663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3b010 with addr=10.0.0.2, port=4420 00:29:34.384 [2024-04-26 13:12:39.170670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b010 is same with the state(5) to be set 00:29:34.384 [2024-04-26 13:12:39.170680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b010 (9): Bad file descriptor 00:29:34.384 [2024-04-26 13:12:39.170690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:34.384 [2024-04-26 13:12:39.170696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:34.384 [2024-04-26 13:12:39.170703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:34.384 [2024-04-26 13:12:39.170713] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.384 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.384 13:12:39 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:34.384 13:12:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:34.384 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.384 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.384 13:12:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.385 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # sort 00:29:34.385 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # xargs 00:29:34.385 [2024-04-26 13:12:39.180071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:34.385 [2024-04-26 13:12:39.180384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.385 [2024-04-26 13:12:39.180628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.385 [2024-04-26 13:12:39.180637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3b010 with addr=10.0.0.2, port=4420 00:29:34.385 [2024-04-26 13:12:39.180645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b010 is same with the state(5) to be set 00:29:34.385 [2024-04-26 13:12:39.180656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b010 (9): Bad file descriptor 00:29:34.385 [2024-04-26 13:12:39.180667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:34.385 [2024-04-26 13:12:39.180676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:34.385 [2024-04-26 13:12:39.180684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:34.385 [2024-04-26 13:12:39.180695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.385 [2024-04-26 13:12:39.190123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:34.385 [2024-04-26 13:12:39.190439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.385 [2024-04-26 13:12:39.190794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.385 [2024-04-26 13:12:39.190803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3b010 with addr=10.0.0.2, port=4420 00:29:34.385 [2024-04-26 13:12:39.190810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b010 is same with the state(5) to be set 00:29:34.385 [2024-04-26 13:12:39.190821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b010 (9): Bad file descriptor 00:29:34.385 [2024-04-26 13:12:39.190831] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:34.385 [2024-04-26 13:12:39.190842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:34.385 [2024-04-26 13:12:39.190849] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:34.385 [2024-04-26 13:12:39.190859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
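The listener-removal check being traced above reduces to two RPCs against the running applications: drop the 4420 listener on the target, then poll the host application until the nvme0 controller reports only the 4421 path. A minimal sketch, assuming the same /tmp/host.sock host socket, default target RPC socket, and 10.0.0.2 addressing used in this run; rpc calls go through SPDK's stock scripts/rpc.py, and the polling loop stands in for the test's waitforcondition helper:

  #!/usr/bin/env bash
  # Sketch only: reproduces the port-removal assertion seen around discovery.sh@127-131.
  rpc()  { scripts/rpc.py "$@"; }                      # target-side RPC (default socket)
  hrpc() { scripts/rpc.py -s /tmp/host.sock "$@"; }    # host-side RPC

  # Remove the 4420 listener from the subsystem, as in the trace above.
  rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Poll (up to 10 times, 1s apart) until only the 4421 path remains on nvme0.
  for _ in {1..10}; do
      paths=$(hrpc bdev_nvme_get_controllers -n nvme0 \
              | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
      [[ $paths == 4421 ]] && break
      sleep 1
  done
  echo "remaining paths for nvme0: $paths"

While the 4420 path is being torn down the host keeps retrying the old connection, which is what produces the repeated connect() errno 111 / "Resetting controller failed" records above.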
00:29:34.385 [2024-04-26 13:12:39.197940] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:34.385 [2024-04-26 13:12:39.197958] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:34.385 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:34.385 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.385 13:12:39 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.385 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:34.385 13:12:39 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:34.385 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.385 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.385 13:12:39 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:34.385 13:12:39 -- host/discovery.sh@63 -- # sort -n 00:29:34.385 13:12:39 -- host/discovery.sh@63 -- # xargs 00:29:34.385 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:29:34.385 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.385 13:12:39 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:34.385 13:12:39 -- host/discovery.sh@79 -- # expected_count=0 00:29:34.385 13:12:39 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:34.385 13:12:39 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:34.385 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.385 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:34.385 13:12:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:34.385 13:12:39 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:34.385 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.385 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.385 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.385 13:12:39 -- host/discovery.sh@74 -- # notification_count=0 00:29:34.385 13:12:39 -- host/discovery.sh@75 -- # notify_id=2 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:34.385 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.385 13:12:39 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:34.385 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.385 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.385 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.385 13:12:39 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.385 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:34.385 13:12:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.385 13:12:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.385 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.385 13:12:39 -- host/discovery.sh@59 -- # sort 00:29:34.385 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.385 13:12:39 -- host/discovery.sh@59 -- # xargs 00:29:34.385 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:34.385 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.385 13:12:39 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.385 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:34.385 13:12:39 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # sort 00:29:34.385 13:12:39 -- host/discovery.sh@55 -- # xargs 00:29:34.385 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.385 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.385 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.646 13:12:39 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:34.646 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.646 13:12:39 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:34.646 13:12:39 -- host/discovery.sh@79 -- # expected_count=2 00:29:34.646 13:12:39 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:34.646 13:12:39 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:34.646 13:12:39 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.646 13:12:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.646 13:12:39 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:34.646 13:12:39 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:34.646 13:12:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:34.646 13:12:39 -- host/discovery.sh@74 -- # jq '. | length' 00:29:34.646 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.646 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:34.646 13:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.646 13:12:39 -- host/discovery.sh@74 -- # notification_count=2 00:29:34.646 13:12:39 -- host/discovery.sh@75 -- # notify_id=4 00:29:34.646 13:12:39 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:34.646 13:12:39 -- common/autotest_common.sh@904 -- # return 0 00:29:34.646 13:12:39 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:34.646 13:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.646 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:35.588 [2024-04-26 13:12:40.567036] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:35.588 [2024-04-26 13:12:40.567054] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:35.588 [2024-04-26 13:12:40.567067] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:35.849 [2024-04-26 13:12:40.655352] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:36.110 [2024-04-26 13:12:40.966896] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:36.111 [2024-04-26 13:12:40.966926] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:36.111 13:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.111 13:12:40 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:36.111 13:12:40 -- common/autotest_common.sh@638 -- # local es=0 00:29:36.111 13:12:40 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:36.111 13:12:40 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:36.111 13:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.111 13:12:40 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:36.111 13:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.111 13:12:40 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:36.111 13:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.111 13:12:40 -- 
common/autotest_common.sh@10 -- # set +x 00:29:36.111 request: 00:29:36.111 { 00:29:36.111 "name": "nvme", 00:29:36.111 "trtype": "tcp", 00:29:36.111 "traddr": "10.0.0.2", 00:29:36.111 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:36.111 "adrfam": "ipv4", 00:29:36.111 "trsvcid": "8009", 00:29:36.111 "wait_for_attach": true, 00:29:36.111 "method": "bdev_nvme_start_discovery", 00:29:36.111 "req_id": 1 00:29:36.111 } 00:29:36.111 Got JSON-RPC error response 00:29:36.111 response: 00:29:36.111 { 00:29:36.111 "code": -17, 00:29:36.111 "message": "File exists" 00:29:36.111 } 00:29:36.111 13:12:40 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:36.111 13:12:40 -- common/autotest_common.sh@641 -- # es=1 00:29:36.111 13:12:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:36.111 13:12:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:36.111 13:12:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:36.111 13:12:40 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:36.111 13:12:40 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:36.111 13:12:40 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:36.111 13:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.111 13:12:40 -- host/discovery.sh@67 -- # sort 00:29:36.111 13:12:40 -- common/autotest_common.sh@10 -- # set +x 00:29:36.111 13:12:40 -- host/discovery.sh@67 -- # xargs 00:29:36.111 13:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.111 13:12:41 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:36.111 13:12:41 -- host/discovery.sh@146 -- # get_bdev_list 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # xargs 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.111 13:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # sort 00:29:36.111 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:29:36.111 13:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.111 13:12:41 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:36.111 13:12:41 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:36.111 13:12:41 -- common/autotest_common.sh@638 -- # local es=0 00:29:36.111 13:12:41 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:36.111 13:12:41 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:36.111 13:12:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.111 13:12:41 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:36.111 13:12:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.111 13:12:41 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:36.111 13:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.111 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:29:36.111 request: 00:29:36.111 { 00:29:36.111 "name": "nvme_second", 00:29:36.111 "trtype": "tcp", 00:29:36.111 "traddr": "10.0.0.2", 00:29:36.111 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:29:36.111 "adrfam": "ipv4", 00:29:36.111 "trsvcid": "8009", 00:29:36.111 "wait_for_attach": true, 00:29:36.111 "method": "bdev_nvme_start_discovery", 00:29:36.111 "req_id": 1 00:29:36.111 } 00:29:36.111 Got JSON-RPC error response 00:29:36.111 response: 00:29:36.111 { 00:29:36.111 "code": -17, 00:29:36.111 "message": "File exists" 00:29:36.111 } 00:29:36.111 13:12:41 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:36.111 13:12:41 -- common/autotest_common.sh@641 -- # es=1 00:29:36.111 13:12:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:36.111 13:12:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:36.111 13:12:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:36.111 13:12:41 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:36.111 13:12:41 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:36.111 13:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.111 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:29:36.111 13:12:41 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:36.111 13:12:41 -- host/discovery.sh@67 -- # sort 00:29:36.111 13:12:41 -- host/discovery.sh@67 -- # xargs 00:29:36.111 13:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.111 13:12:41 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:36.111 13:12:41 -- host/discovery.sh@152 -- # get_bdev_list 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # sort 00:29:36.111 13:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.111 13:12:41 -- host/discovery.sh@55 -- # xargs 00:29:36.111 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:29:36.372 13:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.372 13:12:41 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:36.372 13:12:41 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:36.372 13:12:41 -- common/autotest_common.sh@638 -- # local es=0 00:29:36.372 13:12:41 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:36.372 13:12:41 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:36.372 13:12:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.372 13:12:41 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:36.372 13:12:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.372 13:12:41 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:36.372 13:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.372 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:29:37.315 [2024-04-26 13:12:42.230826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.315 [2024-04-26 13:12:42.231149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.315 [2024-04-26 13:12:42.231170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0xb57810 with addr=10.0.0.2, port=8010 00:29:37.315 [2024-04-26 13:12:42.231183] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:37.315 [2024-04-26 13:12:42.231194] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:37.315 [2024-04-26 13:12:42.231201] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:38.260 [2024-04-26 13:12:43.233172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.260 [2024-04-26 13:12:43.233530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.260 [2024-04-26 13:12:43.233540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1aff0 with addr=10.0.0.2, port=8010 00:29:38.260 [2024-04-26 13:12:43.233551] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:38.260 [2024-04-26 13:12:43.233558] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:38.260 [2024-04-26 13:12:43.233564] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:39.200 [2024-04-26 13:12:44.235176] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:39.200 request: 00:29:39.200 { 00:29:39.200 "name": "nvme_second", 00:29:39.200 "trtype": "tcp", 00:29:39.200 "traddr": "10.0.0.2", 00:29:39.200 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:39.200 "adrfam": "ipv4", 00:29:39.200 "trsvcid": "8010", 00:29:39.200 "attach_timeout_ms": 3000, 00:29:39.200 "method": "bdev_nvme_start_discovery", 00:29:39.200 "req_id": 1 00:29:39.200 } 00:29:39.200 Got JSON-RPC error response 00:29:39.200 response: 00:29:39.200 { 00:29:39.200 "code": -110, 00:29:39.200 "message": "Connection timed out" 00:29:39.200 } 00:29:39.200 13:12:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:39.200 13:12:44 -- common/autotest_common.sh@641 -- # es=1 00:29:39.200 13:12:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:39.200 13:12:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:39.200 13:12:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:39.200 13:12:44 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:39.200 13:12:44 -- host/discovery.sh@67 -- # sort 00:29:39.200 13:12:44 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:39.200 13:12:44 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:39.200 13:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.200 13:12:44 -- common/autotest_common.sh@10 -- # set +x 00:29:39.200 13:12:44 -- host/discovery.sh@67 -- # xargs 00:29:39.460 13:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.460 13:12:44 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:39.460 13:12:44 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:39.460 13:12:44 -- host/discovery.sh@161 -- # kill 4156371 00:29:39.460 13:12:44 -- host/discovery.sh@162 -- # nvmftestfini 00:29:39.460 13:12:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:39.460 13:12:44 -- nvmf/common.sh@117 -- # sync 00:29:39.460 13:12:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:39.460 13:12:44 -- nvmf/common.sh@120 -- # set +e 00:29:39.460 13:12:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:39.460 13:12:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:39.460 rmmod nvme_tcp 00:29:39.460 rmmod nvme_fabrics 
00:29:39.460 rmmod nvme_keyring 00:29:39.460 13:12:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:39.460 13:12:44 -- nvmf/common.sh@124 -- # set -e 00:29:39.460 13:12:44 -- nvmf/common.sh@125 -- # return 0 00:29:39.460 13:12:44 -- nvmf/common.sh@478 -- # '[' -n 4156093 ']' 00:29:39.460 13:12:44 -- nvmf/common.sh@479 -- # killprocess 4156093 00:29:39.460 13:12:44 -- common/autotest_common.sh@936 -- # '[' -z 4156093 ']' 00:29:39.460 13:12:44 -- common/autotest_common.sh@940 -- # kill -0 4156093 00:29:39.460 13:12:44 -- common/autotest_common.sh@941 -- # uname 00:29:39.460 13:12:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:39.460 13:12:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4156093 00:29:39.460 13:12:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:39.460 13:12:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:39.460 13:12:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4156093' 00:29:39.460 killing process with pid 4156093 00:29:39.460 13:12:44 -- common/autotest_common.sh@955 -- # kill 4156093 00:29:39.460 13:12:44 -- common/autotest_common.sh@960 -- # wait 4156093 00:29:39.721 13:12:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:39.721 13:12:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:39.721 13:12:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:39.721 13:12:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:39.721 13:12:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:39.721 13:12:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.721 13:12:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.721 13:12:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.665 13:12:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:41.665 00:29:41.665 real 0m19.958s 00:29:41.665 user 0m23.363s 00:29:41.665 sys 0m6.845s 00:29:41.665 13:12:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:41.665 13:12:46 -- common/autotest_common.sh@10 -- # set +x 00:29:41.665 ************************************ 00:29:41.665 END TEST nvmf_discovery 00:29:41.665 ************************************ 00:29:41.665 13:12:46 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:41.665 13:12:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:41.665 13:12:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:41.665 13:12:46 -- common/autotest_common.sh@10 -- # set +x 00:29:41.927 ************************************ 00:29:41.927 START TEST nvmf_discovery_remove_ifc 00:29:41.927 ************************************ 00:29:41.927 13:12:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:41.927 * Looking for test storage... 
00:29:41.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.927 13:12:46 -- nvmf/common.sh@7 -- # uname -s 00:29:41.927 13:12:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.927 13:12:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.927 13:12:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.927 13:12:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.927 13:12:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.927 13:12:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.927 13:12:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.927 13:12:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.927 13:12:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.927 13:12:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.927 13:12:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:41.927 13:12:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:41.927 13:12:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.927 13:12:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.927 13:12:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.927 13:12:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.927 13:12:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.927 13:12:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.927 13:12:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.927 13:12:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.927 13:12:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.927 13:12:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.927 13:12:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.927 13:12:46 -- paths/export.sh@5 -- # export PATH 00:29:41.927 13:12:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.927 13:12:46 -- nvmf/common.sh@47 -- # : 0 00:29:41.927 13:12:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.927 13:12:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.927 13:12:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.927 13:12:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.927 13:12:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.927 13:12:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.927 13:12:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.927 13:12:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:41.927 13:12:46 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:41.927 13:12:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:41.927 13:12:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.927 13:12:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:41.927 13:12:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:41.927 13:12:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:41.927 13:12:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.927 13:12:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.927 13:12:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.927 13:12:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:41.927 13:12:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:41.927 13:12:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:41.927 13:12:46 -- common/autotest_common.sh@10 -- # set +x 00:29:50.079 13:12:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:50.079 13:12:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:50.079 13:12:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:50.079 13:12:53 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:50.079 13:12:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:50.079 13:12:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:50.079 13:12:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:50.079 13:12:53 -- nvmf/common.sh@295 -- # net_devs=() 00:29:50.079 13:12:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:50.079 13:12:53 -- nvmf/common.sh@296 -- # e810=() 00:29:50.079 13:12:53 -- nvmf/common.sh@296 -- # local -ga e810 00:29:50.079 13:12:53 -- nvmf/common.sh@297 -- # x722=() 00:29:50.079 13:12:53 -- nvmf/common.sh@297 -- # local -ga x722 00:29:50.079 13:12:53 -- nvmf/common.sh@298 -- # mlx=() 00:29:50.079 13:12:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:50.079 13:12:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.079 13:12:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:50.079 13:12:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:50.079 13:12:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:50.079 13:12:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.079 13:12:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:50.079 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:50.079 13:12:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.079 13:12:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:50.079 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:50.079 13:12:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:50.079 13:12:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:50.079 13:12:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:50.079 13:12:53 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.079 13:12:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.079 13:12:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:50.079 13:12:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.079 13:12:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:50.079 Found net devices under 0000:31:00.0: cvl_0_0 00:29:50.079 13:12:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.080 13:12:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.080 13:12:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.080 13:12:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:50.080 13:12:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.080 13:12:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:50.080 Found net devices under 0000:31:00.1: cvl_0_1 00:29:50.080 13:12:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.080 13:12:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:50.080 13:12:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:50.080 13:12:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:50.080 13:12:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:50.080 13:12:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:50.080 13:12:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.080 13:12:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.080 13:12:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.080 13:12:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:50.080 13:12:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.080 13:12:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.080 13:12:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:50.080 13:12:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.080 13:12:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.080 13:12:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:50.080 13:12:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:50.080 13:12:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.080 13:12:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.080 13:12:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.080 13:12:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.080 13:12:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:50.080 13:12:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.080 13:12:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.080 13:12:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.080 13:12:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:50.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:50.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:29:50.080 00:29:50.080 --- 10.0.0.2 ping statistics --- 00:29:50.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.080 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:29:50.080 13:12:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:29:50.080 00:29:50.080 --- 10.0.0.1 ping statistics --- 00:29:50.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.080 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:29:50.080 13:12:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.080 13:12:54 -- nvmf/common.sh@411 -- # return 0 00:29:50.080 13:12:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:50.080 13:12:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.080 13:12:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:50.080 13:12:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:50.080 13:12:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.080 13:12:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:50.080 13:12:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:50.080 13:12:54 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:50.080 13:12:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:50.080 13:12:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:50.080 13:12:54 -- common/autotest_common.sh@10 -- # set +x 00:29:50.080 13:12:54 -- nvmf/common.sh@470 -- # nvmfpid=4162605 00:29:50.080 13:12:54 -- nvmf/common.sh@471 -- # waitforlisten 4162605 00:29:50.080 13:12:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:50.080 13:12:54 -- common/autotest_common.sh@817 -- # '[' -z 4162605 ']' 00:29:50.080 13:12:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.080 13:12:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:50.080 13:12:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.080 13:12:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:50.080 13:12:54 -- common/autotest_common.sh@10 -- # set +x 00:29:50.080 [2024-04-26 13:12:54.268322] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:29:50.080 [2024-04-26 13:12:54.268414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.080 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.080 [2024-04-26 13:12:54.354860] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.080 [2024-04-26 13:12:54.426710] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.080 [2024-04-26 13:12:54.426759] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:50.080 [2024-04-26 13:12:54.426766] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.080 [2024-04-26 13:12:54.426773] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.080 [2024-04-26 13:12:54.426778] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.080 [2024-04-26 13:12:54.426802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.080 13:12:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:50.080 13:12:55 -- common/autotest_common.sh@850 -- # return 0 00:29:50.080 13:12:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:50.080 13:12:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:50.080 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:29:50.080 13:12:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.080 13:12:55 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:50.080 13:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.080 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:29:50.080 [2024-04-26 13:12:55.097229] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.080 [2024-04-26 13:12:55.105422] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:50.080 null0 00:29:50.080 [2024-04-26 13:12:55.137415] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.341 13:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.341 13:12:55 -- host/discovery_remove_ifc.sh@59 -- # hostpid=4162638 00:29:50.341 13:12:55 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:50.341 13:12:55 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4162638 /tmp/host.sock 00:29:50.341 13:12:55 -- common/autotest_common.sh@817 -- # '[' -z 4162638 ']' 00:29:50.341 13:12:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:50.341 13:12:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:50.341 13:12:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:50.341 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:50.341 13:12:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:50.342 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:29:50.342 [2024-04-26 13:12:55.184453] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:29:50.342 [2024-04-26 13:12:55.184499] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162638 ] 00:29:50.342 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.342 [2024-04-26 13:12:55.240982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.342 [2024-04-26 13:12:55.307523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.913 13:12:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:50.914 13:12:55 -- common/autotest_common.sh@850 -- # return 0 00:29:50.914 13:12:55 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.914 13:12:55 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:50.914 13:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.914 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:29:50.914 13:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.914 13:12:55 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:50.914 13:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.914 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:29:51.174 13:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.174 13:12:56 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:51.174 13:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.174 13:12:56 -- common/autotest_common.sh@10 -- # set +x 00:29:52.115 [2024-04-26 13:12:57.035218] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:52.115 [2024-04-26 13:12:57.035239] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:52.115 [2024-04-26 13:12:57.035253] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:52.115 [2024-04-26 13:12:57.164677] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:52.376 [2024-04-26 13:12:57.267234] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:52.376 [2024-04-26 13:12:57.267283] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:52.376 [2024-04-26 13:12:57.267307] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:52.376 [2024-04-26 13:12:57.267322] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:52.376 [2024-04-26 13:12:57.267343] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:52.376 13:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.376 13:12:57 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:52.376 13:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:52.376 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:29:52.376 [2024-04-26 13:12:57.274573] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9a7900 was disconnected and freed. delete nvme_qpair. 00:29:52.376 13:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:52.376 13:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:52.376 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:29:52.376 13:12:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:52.636 13:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:52.636 13:12:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:52.636 13:12:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:53.578 13:12:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:53.578 13:12:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:53.578 13:12:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:53.578 13:12:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:53.578 13:12:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:53.578 13:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.578 13:12:58 -- common/autotest_common.sh@10 -- # set +x 00:29:53.578 13:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.578 13:12:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:53.578 13:12:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:54.522 13:12:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:54.522 13:12:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.522 13:12:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:54.522 13:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.522 13:12:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:54.522 13:12:59 -- common/autotest_common.sh@10 -- # set +x 00:29:54.522 13:12:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:54.522 13:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.782 13:12:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:54.782 13:12:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:55.723 13:13:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:55.723 13:13:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:55.723 13:13:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.723 13:13:00 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:55.723 13:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.723 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:29:55.723 13:13:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:55.723 13:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.723 13:13:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:55.723 13:13:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:56.663 13:13:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:56.663 13:13:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:56.663 13:13:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:56.663 13:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.663 13:13:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:56.663 13:13:01 -- common/autotest_common.sh@10 -- # set +x 00:29:56.663 13:13:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:56.663 13:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.663 13:13:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:56.663 13:13:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:58.043 13:13:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:58.043 13:13:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:58.043 13:13:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.043 13:13:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:58.043 13:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.043 13:13:02 -- common/autotest_common.sh@10 -- # set +x 00:29:58.043 13:13:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:58.043 [2024-04-26 13:13:02.707866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:58.043 [2024-04-26 13:13:02.707912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:58.043 [2024-04-26 13:13:02.707923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.043 [2024-04-26 13:13:02.707933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:58.043 [2024-04-26 13:13:02.707940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.043 [2024-04-26 13:13:02.707948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:58.043 [2024-04-26 13:13:02.707955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.043 [2024-04-26 13:13:02.707963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:58.043 [2024-04-26 13:13:02.707970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.043 [2024-04-26 13:13:02.707977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:58.043 [2024-04-26 13:13:02.707984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.043 [2024-04-26 13:13:02.707991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96dd90 is same with the state(5) to be set 00:29:58.043 [2024-04-26 13:13:02.717887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96dd90 (9): Bad file descriptor 00:29:58.043 13:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.043 [2024-04-26 13:13:02.727928] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:58.043 13:13:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:58.043 13:13:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:58.979 [2024-04-26 13:13:03.752862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:58.979 13:13:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:58.979 13:13:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.979 13:13:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:58.979 13:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.979 13:13:03 -- common/autotest_common.sh@10 -- # set +x 00:29:58.979 13:13:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:58.980 13:13:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:59.921 [2024-04-26 13:13:04.776868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:59.921 [2024-04-26 13:13:04.776906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96dd90 with addr=10.0.0.2, port=4420 00:29:59.921 [2024-04-26 13:13:04.776917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96dd90 is same with the state(5) to be set 00:29:59.921 [2024-04-26 13:13:04.777262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96dd90 (9): Bad file descriptor 00:29:59.921 [2024-04-26 13:13:04.777283] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
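The connect() errno 110 and "Resetting controller failed" messages above are the expected outcome of the interface-removal step traced earlier in this test: the target-side address was deleted and cvl_0_0 taken down inside the target's network namespace, so the host's discovery controller can no longer reconnect. A condensed, illustrative restatement of that fault injection and the later restore, using only commands already visible in the trace (this is a summary, not the authoritative host/discovery_remove_ifc.sh):

    # Remove the target path (traced at steps @75/@76 above) - reconnects then fail with errno 110
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # Restore the path (traced later at steps @82/@83) - discovery re-attaches the subsystem as nvme1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up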
00:29:59.921 [2024-04-26 13:13:04.777303] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:59.921 [2024-04-26 13:13:04.777323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.921 [2024-04-26 13:13:04.777333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.921 [2024-04-26 13:13:04.777343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.921 [2024-04-26 13:13:04.777350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.921 [2024-04-26 13:13:04.777359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.921 [2024-04-26 13:13:04.777366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.921 [2024-04-26 13:13:04.777375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.921 [2024-04-26 13:13:04.777382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.921 [2024-04-26 13:13:04.777390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.921 [2024-04-26 13:13:04.777397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.921 [2024-04-26 13:13:04.777405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
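The repeated rpc_cmd / jq / sort / xargs traces surrounding this dump come from the test's bdev-polling helpers, which wait for the namespace bdev to disappear after the path is removed and to reappear after it is restored. A minimal sketch of those helpers, reconstructed from the traced commands; the names mirror host/discovery_remove_ifc.sh, but the bodies here are illustrative assumptions rather than the script itself:

    # Assumed reconstruction: list the bdev names currently known to the host app
    # over its RPC socket (/tmp/host.sock), as one space-separated string.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Assumed reconstruction: poll once per second until the list matches the expected
    # value -- "" while waiting for nvme0n1 to be removed, "nvme1n1" after the re-attach.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }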
00:29:59.921 [2024-04-26 13:13:04.777938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96e1a0 (9): Bad file descriptor 00:29:59.921 [2024-04-26 13:13:04.778948] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:59.921 [2024-04-26 13:13:04.778961] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:59.921 13:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.921 13:13:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:59.921 13:13:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:00.861 13:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:00.861 13:13:05 -- common/autotest_common.sh@10 -- # set +x 00:30:00.861 13:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.861 13:13:05 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.121 13:13:05 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:01.121 13:13:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:01.121 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.121 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:01.121 13:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.121 13:13:05 -- common/autotest_common.sh@10 -- # set +x 00:30:01.121 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:01.121 13:13:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:01.121 13:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.121 13:13:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:01.121 13:13:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:02.063 [2024-04-26 13:13:06.832016] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:02.063 [2024-04-26 13:13:06.832037] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:02.063 [2024-04-26 13:13:06.832051] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:02.063 [2024-04-26 13:13:06.961464] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:02.063 [2024-04-26 13:13:07.019045] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:02.063 [2024-04-26 13:13:07.019084] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:02.063 [2024-04-26 13:13:07.019104] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:02.063 [2024-04-26 13:13:07.019120] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 
done 00:30:02.063 [2024-04-26 13:13:07.019128] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:02.063 13:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:02.063 13:13:07 -- common/autotest_common.sh@10 -- # set +x 00:30:02.063 13:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.063 [2024-04-26 13:13:07.069556] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9b1e90 was disconnected and freed. delete nvme_qpair. 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:02.063 13:13:07 -- host/discovery_remove_ifc.sh@90 -- # killprocess 4162638 00:30:02.063 13:13:07 -- common/autotest_common.sh@936 -- # '[' -z 4162638 ']' 00:30:02.063 13:13:07 -- common/autotest_common.sh@940 -- # kill -0 4162638 00:30:02.063 13:13:07 -- common/autotest_common.sh@941 -- # uname 00:30:02.063 13:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:02.063 13:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4162638 00:30:02.323 13:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:02.323 13:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:02.323 13:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4162638' 00:30:02.323 killing process with pid 4162638 00:30:02.323 13:13:07 -- common/autotest_common.sh@955 -- # kill 4162638 00:30:02.323 13:13:07 -- common/autotest_common.sh@960 -- # wait 4162638 00:30:02.323 13:13:07 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:02.323 13:13:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:02.323 13:13:07 -- nvmf/common.sh@117 -- # sync 00:30:02.323 13:13:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.323 13:13:07 -- nvmf/common.sh@120 -- # set +e 00:30:02.323 13:13:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.323 13:13:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.323 rmmod nvme_tcp 00:30:02.323 rmmod nvme_fabrics 00:30:02.323 rmmod nvme_keyring 00:30:02.323 13:13:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:02.323 13:13:07 -- nvmf/common.sh@124 -- # set -e 00:30:02.323 13:13:07 -- nvmf/common.sh@125 -- # return 0 00:30:02.323 13:13:07 -- nvmf/common.sh@478 -- # '[' -n 4162605 ']' 00:30:02.323 13:13:07 -- nvmf/common.sh@479 -- # killprocess 4162605 00:30:02.323 13:13:07 -- common/autotest_common.sh@936 -- # '[' -z 4162605 ']' 00:30:02.323 13:13:07 -- common/autotest_common.sh@940 -- # kill -0 4162605 00:30:02.323 13:13:07 -- common/autotest_common.sh@941 -- # uname 00:30:02.323 13:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:02.323 13:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4162605 00:30:02.323 13:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:02.323 13:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:02.323 13:13:07 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 4162605' 00:30:02.323 killing process with pid 4162605 00:30:02.323 13:13:07 -- common/autotest_common.sh@955 -- # kill 4162605 00:30:02.323 13:13:07 -- common/autotest_common.sh@960 -- # wait 4162605 00:30:02.583 13:13:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:02.583 13:13:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:02.583 13:13:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:02.583 13:13:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.583 13:13:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:02.583 13:13:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.583 13:13:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.583 13:13:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.497 13:13:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:04.758 00:30:04.758 real 0m22.751s 00:30:04.758 user 0m25.820s 00:30:04.758 sys 0m6.563s 00:30:04.758 13:13:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:04.758 13:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:04.758 ************************************ 00:30:04.758 END TEST nvmf_discovery_remove_ifc 00:30:04.758 ************************************ 00:30:04.758 13:13:09 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:04.758 13:13:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:04.758 13:13:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:04.758 13:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:04.758 ************************************ 00:30:04.758 START TEST nvmf_identify_kernel_target 00:30:04.758 ************************************ 00:30:04.758 13:13:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:05.019 * Looking for test storage... 
00:30:05.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:05.019 13:13:09 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.019 13:13:09 -- nvmf/common.sh@7 -- # uname -s 00:30:05.019 13:13:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.019 13:13:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.020 13:13:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.020 13:13:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.020 13:13:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.020 13:13:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.020 13:13:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.020 13:13:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.020 13:13:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.020 13:13:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.020 13:13:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:05.020 13:13:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:05.020 13:13:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.020 13:13:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.020 13:13:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.020 13:13:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.020 13:13:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.020 13:13:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.020 13:13:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.020 13:13:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.020 13:13:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.020 13:13:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.020 13:13:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.020 13:13:09 -- paths/export.sh@5 -- # export PATH 00:30:05.020 13:13:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.020 13:13:09 -- nvmf/common.sh@47 -- # : 0 00:30:05.020 13:13:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.020 13:13:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.020 13:13:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.020 13:13:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.020 13:13:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.020 13:13:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.020 13:13:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.020 13:13:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.020 13:13:09 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:05.020 13:13:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:05.020 13:13:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.020 13:13:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:05.020 13:13:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:05.020 13:13:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:05.020 13:13:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.020 13:13:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.020 13:13:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.020 13:13:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:05.020 13:13:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:05.020 13:13:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.020 13:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:13.157 13:13:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:13.157 13:13:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.157 13:13:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.157 13:13:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.157 13:13:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.157 13:13:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.157 13:13:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.157 13:13:16 -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.157 13:13:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.157 13:13:16 -- nvmf/common.sh@296 -- # e810=() 00:30:13.157 13:13:16 -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.157 13:13:16 -- nvmf/common.sh@297 -- # 
x722=() 00:30:13.157 13:13:16 -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.157 13:13:16 -- nvmf/common.sh@298 -- # mlx=() 00:30:13.157 13:13:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.157 13:13:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.157 13:13:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.157 13:13:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.157 13:13:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.157 13:13:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.157 13:13:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:13.157 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:13.157 13:13:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.157 13:13:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.157 13:13:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:13.157 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:13.157 13:13:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.158 13:13:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.158 13:13:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.158 13:13:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:13.158 13:13:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.158 13:13:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:13.158 Found net devices under 0000:31:00.0: cvl_0_0 00:30:13.158 13:13:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:30:13.158 13:13:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.158 13:13:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.158 13:13:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:13.158 13:13:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.158 13:13:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:13.158 Found net devices under 0000:31:00.1: cvl_0_1 00:30:13.158 13:13:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.158 13:13:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:13.158 13:13:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:13.158 13:13:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:13.158 13:13:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:13.158 13:13:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.158 13:13:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.158 13:13:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.158 13:13:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.158 13:13:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.158 13:13:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.158 13:13:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.158 13:13:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.158 13:13:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.158 13:13:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.158 13:13:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.158 13:13:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.158 13:13:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.158 13:13:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.158 13:13:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.158 13:13:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.158 13:13:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.158 13:13:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.158 13:13:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.158 13:13:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:30:13.158 00:30:13.158 --- 10.0.0.2 ping statistics --- 00:30:13.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.158 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:30:13.158 13:13:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:30:13.158 00:30:13.158 --- 10.0.0.1 ping statistics --- 00:30:13.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.158 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:30:13.158 13:13:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.158 13:13:17 -- nvmf/common.sh@411 -- # return 0 00:30:13.158 13:13:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:13.158 13:13:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.158 13:13:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:13.158 13:13:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:13.158 13:13:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.158 13:13:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:13.158 13:13:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:13.158 13:13:17 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:13.158 13:13:17 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:13.158 13:13:17 -- nvmf/common.sh@717 -- # local ip 00:30:13.158 13:13:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:13.158 13:13:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:13.158 13:13:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.158 13:13:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.158 13:13:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:13.158 13:13:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.158 13:13:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:13.158 13:13:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:13.158 13:13:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:13.158 13:13:17 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:13.158 13:13:17 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:13.158 13:13:17 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:13.158 13:13:17 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:13.158 13:13:17 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:13.158 13:13:17 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:13.158 13:13:17 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:13.158 13:13:17 -- nvmf/common.sh@628 -- # local block nvme 00:30:13.158 13:13:17 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:13.158 13:13:17 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:13.158 13:13:17 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:13.158 13:13:17 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:15.704 Waiting for block devices as requested 00:30:15.704 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:15.704 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:15.704 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:15.704 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:15.704 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:15.964 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:15.964 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:15.965 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:16.225 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:16.225 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:16.485 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:16.485 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:16.485 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:16.485 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:16.746 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:16.746 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:16.746 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:17.006 13:13:21 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:17.006 13:13:21 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:17.006 13:13:21 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:17.006 13:13:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:17.006 13:13:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:17.006 13:13:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:17.006 13:13:21 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:17.006 13:13:21 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:17.006 13:13:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:17.006 No valid GPT data, bailing 00:30:17.006 13:13:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:17.006 13:13:22 -- scripts/common.sh@391 -- # pt= 00:30:17.006 13:13:22 -- scripts/common.sh@392 -- # return 1 00:30:17.006 13:13:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:17.006 13:13:22 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:30:17.006 13:13:22 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:17.006 13:13:22 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:17.267 13:13:22 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:17.267 13:13:22 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:17.267 13:13:22 -- nvmf/common.sh@656 -- # echo 1 00:30:17.267 13:13:22 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:30:17.267 13:13:22 -- nvmf/common.sh@658 -- # echo 1 00:30:17.267 13:13:22 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:17.267 13:13:22 -- nvmf/common.sh@661 -- # echo tcp 00:30:17.267 13:13:22 -- nvmf/common.sh@662 -- # echo 4420 00:30:17.267 13:13:22 -- nvmf/common.sh@663 -- # echo ipv4 00:30:17.267 13:13:22 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:17.267 13:13:22 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:30:17.267 00:30:17.267 Discovery Log Number of Records 2, Generation counter 2 00:30:17.267 =====Discovery Log Entry 0====== 00:30:17.267 trtype: tcp 00:30:17.267 adrfam: ipv4 00:30:17.267 subtype: current discovery subsystem 00:30:17.267 treq: not specified, sq flow control disable supported 00:30:17.267 portid: 1 00:30:17.267 trsvcid: 4420 00:30:17.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:17.267 traddr: 10.0.0.1 00:30:17.267 eflags: none 00:30:17.267 sectype: none 00:30:17.267 =====Discovery Log Entry 1====== 00:30:17.267 trtype: tcp 00:30:17.267 adrfam: ipv4 00:30:17.267 subtype: nvme subsystem 00:30:17.267 treq: not specified, sq flow control disable supported 00:30:17.267 portid: 1 00:30:17.267 trsvcid: 4420 00:30:17.267 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:17.267 traddr: 10.0.0.1 00:30:17.267 eflags: none 00:30:17.267 sectype: none 00:30:17.267 13:13:22 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:17.267 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:17.267 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.267 ===================================================== 00:30:17.267 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:17.267 ===================================================== 00:30:17.267 Controller Capabilities/Features 00:30:17.267 ================================ 00:30:17.267 Vendor ID: 0000 00:30:17.267 Subsystem Vendor ID: 0000 00:30:17.267 Serial Number: 6c7a6a76627c065e7948 00:30:17.267 Model Number: Linux 00:30:17.267 Firmware Version: 6.7.0-68 00:30:17.267 Recommended Arb Burst: 0 00:30:17.267 IEEE OUI Identifier: 00 00 00 00:30:17.267 Multi-path I/O 00:30:17.267 May have multiple subsystem ports: No 00:30:17.267 May have multiple controllers: No 00:30:17.267 Associated with SR-IOV VF: No 00:30:17.267 Max Data Transfer Size: Unlimited 00:30:17.267 Max Number of Namespaces: 0 00:30:17.267 Max Number of I/O Queues: 1024 00:30:17.267 NVMe Specification Version (VS): 1.3 00:30:17.267 NVMe Specification Version (Identify): 1.3 00:30:17.267 Maximum Queue Entries: 1024 00:30:17.267 Contiguous Queues Required: No 00:30:17.267 Arbitration Mechanisms Supported 00:30:17.267 Weighted Round Robin: Not Supported 00:30:17.267 Vendor Specific: Not Supported 00:30:17.267 Reset Timeout: 7500 ms 00:30:17.267 Doorbell Stride: 4 bytes 00:30:17.267 NVM Subsystem Reset: Not Supported 00:30:17.267 Command Sets Supported 00:30:17.267 NVM Command Set: Supported 00:30:17.267 Boot Partition: Not Supported 00:30:17.267 Memory Page Size Minimum: 4096 bytes 00:30:17.267 Memory Page Size Maximum: 4096 bytes 00:30:17.267 Persistent Memory Region: Not Supported 00:30:17.267 Optional Asynchronous Events Supported 00:30:17.267 Namespace Attribute Notices: Not Supported 00:30:17.267 Firmware Activation Notices: Not Supported 00:30:17.267 ANA Change Notices: Not Supported 00:30:17.267 PLE Aggregate Log Change Notices: Not Supported 00:30:17.268 LBA Status Info Alert Notices: Not Supported 00:30:17.268 EGE Aggregate Log Change Notices: Not Supported 00:30:17.268 Normal NVM Subsystem Shutdown event: Not Supported 00:30:17.268 Zone Descriptor Change Notices: Not Supported 00:30:17.268 Discovery Log Change Notices: Supported 
00:30:17.268 Controller Attributes 00:30:17.268 128-bit Host Identifier: Not Supported 00:30:17.268 Non-Operational Permissive Mode: Not Supported 00:30:17.268 NVM Sets: Not Supported 00:30:17.268 Read Recovery Levels: Not Supported 00:30:17.268 Endurance Groups: Not Supported 00:30:17.268 Predictable Latency Mode: Not Supported 00:30:17.268 Traffic Based Keep ALive: Not Supported 00:30:17.268 Namespace Granularity: Not Supported 00:30:17.268 SQ Associations: Not Supported 00:30:17.268 UUID List: Not Supported 00:30:17.268 Multi-Domain Subsystem: Not Supported 00:30:17.268 Fixed Capacity Management: Not Supported 00:30:17.268 Variable Capacity Management: Not Supported 00:30:17.268 Delete Endurance Group: Not Supported 00:30:17.268 Delete NVM Set: Not Supported 00:30:17.268 Extended LBA Formats Supported: Not Supported 00:30:17.268 Flexible Data Placement Supported: Not Supported 00:30:17.268 00:30:17.268 Controller Memory Buffer Support 00:30:17.268 ================================ 00:30:17.268 Supported: No 00:30:17.268 00:30:17.268 Persistent Memory Region Support 00:30:17.268 ================================ 00:30:17.268 Supported: No 00:30:17.268 00:30:17.268 Admin Command Set Attributes 00:30:17.268 ============================ 00:30:17.268 Security Send/Receive: Not Supported 00:30:17.268 Format NVM: Not Supported 00:30:17.268 Firmware Activate/Download: Not Supported 00:30:17.268 Namespace Management: Not Supported 00:30:17.268 Device Self-Test: Not Supported 00:30:17.268 Directives: Not Supported 00:30:17.268 NVMe-MI: Not Supported 00:30:17.268 Virtualization Management: Not Supported 00:30:17.268 Doorbell Buffer Config: Not Supported 00:30:17.268 Get LBA Status Capability: Not Supported 00:30:17.268 Command & Feature Lockdown Capability: Not Supported 00:30:17.268 Abort Command Limit: 1 00:30:17.268 Async Event Request Limit: 1 00:30:17.268 Number of Firmware Slots: N/A 00:30:17.268 Firmware Slot 1 Read-Only: N/A 00:30:17.268 Firmware Activation Without Reset: N/A 00:30:17.268 Multiple Update Detection Support: N/A 00:30:17.268 Firmware Update Granularity: No Information Provided 00:30:17.268 Per-Namespace SMART Log: No 00:30:17.268 Asymmetric Namespace Access Log Page: Not Supported 00:30:17.268 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:17.268 Command Effects Log Page: Not Supported 00:30:17.268 Get Log Page Extended Data: Supported 00:30:17.268 Telemetry Log Pages: Not Supported 00:30:17.268 Persistent Event Log Pages: Not Supported 00:30:17.268 Supported Log Pages Log Page: May Support 00:30:17.268 Commands Supported & Effects Log Page: Not Supported 00:30:17.268 Feature Identifiers & Effects Log Page:May Support 00:30:17.268 NVMe-MI Commands & Effects Log Page: May Support 00:30:17.268 Data Area 4 for Telemetry Log: Not Supported 00:30:17.268 Error Log Page Entries Supported: 1 00:30:17.268 Keep Alive: Not Supported 00:30:17.268 00:30:17.268 NVM Command Set Attributes 00:30:17.268 ========================== 00:30:17.268 Submission Queue Entry Size 00:30:17.268 Max: 1 00:30:17.268 Min: 1 00:30:17.268 Completion Queue Entry Size 00:30:17.268 Max: 1 00:30:17.268 Min: 1 00:30:17.268 Number of Namespaces: 0 00:30:17.268 Compare Command: Not Supported 00:30:17.268 Write Uncorrectable Command: Not Supported 00:30:17.268 Dataset Management Command: Not Supported 00:30:17.268 Write Zeroes Command: Not Supported 00:30:17.268 Set Features Save Field: Not Supported 00:30:17.268 Reservations: Not Supported 00:30:17.268 Timestamp: Not Supported 00:30:17.268 Copy: Not 
Supported 00:30:17.268 Volatile Write Cache: Not Present 00:30:17.268 Atomic Write Unit (Normal): 1 00:30:17.268 Atomic Write Unit (PFail): 1 00:30:17.268 Atomic Compare & Write Unit: 1 00:30:17.268 Fused Compare & Write: Not Supported 00:30:17.268 Scatter-Gather List 00:30:17.268 SGL Command Set: Supported 00:30:17.268 SGL Keyed: Not Supported 00:30:17.268 SGL Bit Bucket Descriptor: Not Supported 00:30:17.268 SGL Metadata Pointer: Not Supported 00:30:17.268 Oversized SGL: Not Supported 00:30:17.268 SGL Metadata Address: Not Supported 00:30:17.268 SGL Offset: Supported 00:30:17.268 Transport SGL Data Block: Not Supported 00:30:17.268 Replay Protected Memory Block: Not Supported 00:30:17.268 00:30:17.268 Firmware Slot Information 00:30:17.268 ========================= 00:30:17.268 Active slot: 0 00:30:17.268 00:30:17.268 00:30:17.268 Error Log 00:30:17.268 ========= 00:30:17.268 00:30:17.268 Active Namespaces 00:30:17.268 ================= 00:30:17.268 Discovery Log Page 00:30:17.268 ================== 00:30:17.268 Generation Counter: 2 00:30:17.268 Number of Records: 2 00:30:17.268 Record Format: 0 00:30:17.268 00:30:17.268 Discovery Log Entry 0 00:30:17.268 ---------------------- 00:30:17.268 Transport Type: 3 (TCP) 00:30:17.268 Address Family: 1 (IPv4) 00:30:17.268 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:17.268 Entry Flags: 00:30:17.268 Duplicate Returned Information: 0 00:30:17.268 Explicit Persistent Connection Support for Discovery: 0 00:30:17.268 Transport Requirements: 00:30:17.268 Secure Channel: Not Specified 00:30:17.268 Port ID: 1 (0x0001) 00:30:17.268 Controller ID: 65535 (0xffff) 00:30:17.268 Admin Max SQ Size: 32 00:30:17.268 Transport Service Identifier: 4420 00:30:17.268 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:17.268 Transport Address: 10.0.0.1 00:30:17.268 Discovery Log Entry 1 00:30:17.268 ---------------------- 00:30:17.268 Transport Type: 3 (TCP) 00:30:17.268 Address Family: 1 (IPv4) 00:30:17.268 Subsystem Type: 2 (NVM Subsystem) 00:30:17.268 Entry Flags: 00:30:17.268 Duplicate Returned Information: 0 00:30:17.268 Explicit Persistent Connection Support for Discovery: 0 00:30:17.268 Transport Requirements: 00:30:17.268 Secure Channel: Not Specified 00:30:17.268 Port ID: 1 (0x0001) 00:30:17.268 Controller ID: 65535 (0xffff) 00:30:17.268 Admin Max SQ Size: 32 00:30:17.268 Transport Service Identifier: 4420 00:30:17.268 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:17.268 Transport Address: 10.0.0.1 00:30:17.268 13:13:22 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:17.268 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.268 get_feature(0x01) failed 00:30:17.268 get_feature(0x02) failed 00:30:17.268 get_feature(0x04) failed 00:30:17.268 ===================================================== 00:30:17.268 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:17.268 ===================================================== 00:30:17.268 Controller Capabilities/Features 00:30:17.268 ================================ 00:30:17.268 Vendor ID: 0000 00:30:17.268 Subsystem Vendor ID: 0000 00:30:17.268 Serial Number: b47de2b2fc5d5e143723 00:30:17.268 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:17.268 Firmware Version: 6.7.0-68 00:30:17.268 Recommended Arb Burst: 6 00:30:17.268 IEEE OUI Identifier: 00 00 00 
00:30:17.268 Multi-path I/O 00:30:17.268 May have multiple subsystem ports: Yes 00:30:17.268 May have multiple controllers: Yes 00:30:17.268 Associated with SR-IOV VF: No 00:30:17.268 Max Data Transfer Size: Unlimited 00:30:17.268 Max Number of Namespaces: 1024 00:30:17.268 Max Number of I/O Queues: 128 00:30:17.268 NVMe Specification Version (VS): 1.3 00:30:17.268 NVMe Specification Version (Identify): 1.3 00:30:17.268 Maximum Queue Entries: 1024 00:30:17.268 Contiguous Queues Required: No 00:30:17.268 Arbitration Mechanisms Supported 00:30:17.268 Weighted Round Robin: Not Supported 00:30:17.268 Vendor Specific: Not Supported 00:30:17.268 Reset Timeout: 7500 ms 00:30:17.268 Doorbell Stride: 4 bytes 00:30:17.268 NVM Subsystem Reset: Not Supported 00:30:17.268 Command Sets Supported 00:30:17.268 NVM Command Set: Supported 00:30:17.268 Boot Partition: Not Supported 00:30:17.268 Memory Page Size Minimum: 4096 bytes 00:30:17.268 Memory Page Size Maximum: 4096 bytes 00:30:17.268 Persistent Memory Region: Not Supported 00:30:17.268 Optional Asynchronous Events Supported 00:30:17.269 Namespace Attribute Notices: Supported 00:30:17.269 Firmware Activation Notices: Not Supported 00:30:17.269 ANA Change Notices: Supported 00:30:17.269 PLE Aggregate Log Change Notices: Not Supported 00:30:17.269 LBA Status Info Alert Notices: Not Supported 00:30:17.269 EGE Aggregate Log Change Notices: Not Supported 00:30:17.269 Normal NVM Subsystem Shutdown event: Not Supported 00:30:17.269 Zone Descriptor Change Notices: Not Supported 00:30:17.269 Discovery Log Change Notices: Not Supported 00:30:17.269 Controller Attributes 00:30:17.269 128-bit Host Identifier: Supported 00:30:17.269 Non-Operational Permissive Mode: Not Supported 00:30:17.269 NVM Sets: Not Supported 00:30:17.269 Read Recovery Levels: Not Supported 00:30:17.269 Endurance Groups: Not Supported 00:30:17.269 Predictable Latency Mode: Not Supported 00:30:17.269 Traffic Based Keep ALive: Supported 00:30:17.269 Namespace Granularity: Not Supported 00:30:17.269 SQ Associations: Not Supported 00:30:17.269 UUID List: Not Supported 00:30:17.269 Multi-Domain Subsystem: Not Supported 00:30:17.269 Fixed Capacity Management: Not Supported 00:30:17.269 Variable Capacity Management: Not Supported 00:30:17.269 Delete Endurance Group: Not Supported 00:30:17.269 Delete NVM Set: Not Supported 00:30:17.269 Extended LBA Formats Supported: Not Supported 00:30:17.269 Flexible Data Placement Supported: Not Supported 00:30:17.269 00:30:17.269 Controller Memory Buffer Support 00:30:17.269 ================================ 00:30:17.269 Supported: No 00:30:17.269 00:30:17.269 Persistent Memory Region Support 00:30:17.269 ================================ 00:30:17.269 Supported: No 00:30:17.269 00:30:17.269 Admin Command Set Attributes 00:30:17.269 ============================ 00:30:17.269 Security Send/Receive: Not Supported 00:30:17.269 Format NVM: Not Supported 00:30:17.269 Firmware Activate/Download: Not Supported 00:30:17.269 Namespace Management: Not Supported 00:30:17.269 Device Self-Test: Not Supported 00:30:17.269 Directives: Not Supported 00:30:17.269 NVMe-MI: Not Supported 00:30:17.269 Virtualization Management: Not Supported 00:30:17.269 Doorbell Buffer Config: Not Supported 00:30:17.269 Get LBA Status Capability: Not Supported 00:30:17.269 Command & Feature Lockdown Capability: Not Supported 00:30:17.269 Abort Command Limit: 4 00:30:17.269 Async Event Request Limit: 4 00:30:17.269 Number of Firmware Slots: N/A 00:30:17.269 Firmware Slot 1 Read-Only: N/A 00:30:17.269 
Firmware Activation Without Reset: N/A 00:30:17.269 Multiple Update Detection Support: N/A 00:30:17.269 Firmware Update Granularity: No Information Provided 00:30:17.269 Per-Namespace SMART Log: Yes 00:30:17.269 Asymmetric Namespace Access Log Page: Supported 00:30:17.269 ANA Transition Time : 10 sec 00:30:17.269 00:30:17.269 Asymmetric Namespace Access Capabilities 00:30:17.269 ANA Optimized State : Supported 00:30:17.269 ANA Non-Optimized State : Supported 00:30:17.269 ANA Inaccessible State : Supported 00:30:17.269 ANA Persistent Loss State : Supported 00:30:17.269 ANA Change State : Supported 00:30:17.269 ANAGRPID is not changed : No 00:30:17.269 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:17.269 00:30:17.269 ANA Group Identifier Maximum : 128 00:30:17.269 Number of ANA Group Identifiers : 128 00:30:17.269 Max Number of Allowed Namespaces : 1024 00:30:17.269 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:17.269 Command Effects Log Page: Supported 00:30:17.269 Get Log Page Extended Data: Supported 00:30:17.269 Telemetry Log Pages: Not Supported 00:30:17.269 Persistent Event Log Pages: Not Supported 00:30:17.269 Supported Log Pages Log Page: May Support 00:30:17.269 Commands Supported & Effects Log Page: Not Supported 00:30:17.269 Feature Identifiers & Effects Log Page:May Support 00:30:17.269 NVMe-MI Commands & Effects Log Page: May Support 00:30:17.269 Data Area 4 for Telemetry Log: Not Supported 00:30:17.269 Error Log Page Entries Supported: 128 00:30:17.269 Keep Alive: Supported 00:30:17.269 Keep Alive Granularity: 1000 ms 00:30:17.269 00:30:17.269 NVM Command Set Attributes 00:30:17.269 ========================== 00:30:17.269 Submission Queue Entry Size 00:30:17.269 Max: 64 00:30:17.269 Min: 64 00:30:17.269 Completion Queue Entry Size 00:30:17.269 Max: 16 00:30:17.269 Min: 16 00:30:17.269 Number of Namespaces: 1024 00:30:17.269 Compare Command: Not Supported 00:30:17.269 Write Uncorrectable Command: Not Supported 00:30:17.269 Dataset Management Command: Supported 00:30:17.269 Write Zeroes Command: Supported 00:30:17.269 Set Features Save Field: Not Supported 00:30:17.269 Reservations: Not Supported 00:30:17.269 Timestamp: Not Supported 00:30:17.269 Copy: Not Supported 00:30:17.269 Volatile Write Cache: Present 00:30:17.269 Atomic Write Unit (Normal): 1 00:30:17.269 Atomic Write Unit (PFail): 1 00:30:17.269 Atomic Compare & Write Unit: 1 00:30:17.269 Fused Compare & Write: Not Supported 00:30:17.269 Scatter-Gather List 00:30:17.269 SGL Command Set: Supported 00:30:17.269 SGL Keyed: Not Supported 00:30:17.269 SGL Bit Bucket Descriptor: Not Supported 00:30:17.269 SGL Metadata Pointer: Not Supported 00:30:17.269 Oversized SGL: Not Supported 00:30:17.269 SGL Metadata Address: Not Supported 00:30:17.269 SGL Offset: Supported 00:30:17.269 Transport SGL Data Block: Not Supported 00:30:17.269 Replay Protected Memory Block: Not Supported 00:30:17.269 00:30:17.269 Firmware Slot Information 00:30:17.269 ========================= 00:30:17.269 Active slot: 0 00:30:17.269 00:30:17.269 Asymmetric Namespace Access 00:30:17.269 =========================== 00:30:17.269 Change Count : 0 00:30:17.269 Number of ANA Group Descriptors : 1 00:30:17.269 ANA Group Descriptor : 0 00:30:17.269 ANA Group ID : 1 00:30:17.269 Number of NSID Values : 1 00:30:17.269 Change Count : 0 00:30:17.269 ANA State : 1 00:30:17.269 Namespace Identifier : 1 00:30:17.269 00:30:17.269 Commands Supported and Effects 00:30:17.269 ============================== 00:30:17.269 Admin Commands 00:30:17.269 -------------- 
00:30:17.269 Get Log Page (02h): Supported 00:30:17.269 Identify (06h): Supported 00:30:17.269 Abort (08h): Supported 00:30:17.269 Set Features (09h): Supported 00:30:17.269 Get Features (0Ah): Supported 00:30:17.269 Asynchronous Event Request (0Ch): Supported 00:30:17.269 Keep Alive (18h): Supported 00:30:17.269 I/O Commands 00:30:17.269 ------------ 00:30:17.269 Flush (00h): Supported 00:30:17.269 Write (01h): Supported LBA-Change 00:30:17.269 Read (02h): Supported 00:30:17.269 Write Zeroes (08h): Supported LBA-Change 00:30:17.269 Dataset Management (09h): Supported 00:30:17.269 00:30:17.269 Error Log 00:30:17.269 ========= 00:30:17.269 Entry: 0 00:30:17.269 Error Count: 0x3 00:30:17.269 Submission Queue Id: 0x0 00:30:17.269 Command Id: 0x5 00:30:17.269 Phase Bit: 0 00:30:17.269 Status Code: 0x2 00:30:17.269 Status Code Type: 0x0 00:30:17.269 Do Not Retry: 1 00:30:17.269 Error Location: 0x28 00:30:17.269 LBA: 0x0 00:30:17.269 Namespace: 0x0 00:30:17.269 Vendor Log Page: 0x0 00:30:17.269 ----------- 00:30:17.269 Entry: 1 00:30:17.269 Error Count: 0x2 00:30:17.269 Submission Queue Id: 0x0 00:30:17.269 Command Id: 0x5 00:30:17.269 Phase Bit: 0 00:30:17.269 Status Code: 0x2 00:30:17.269 Status Code Type: 0x0 00:30:17.269 Do Not Retry: 1 00:30:17.269 Error Location: 0x28 00:30:17.269 LBA: 0x0 00:30:17.269 Namespace: 0x0 00:30:17.269 Vendor Log Page: 0x0 00:30:17.269 ----------- 00:30:17.269 Entry: 2 00:30:17.269 Error Count: 0x1 00:30:17.269 Submission Queue Id: 0x0 00:30:17.269 Command Id: 0x4 00:30:17.269 Phase Bit: 0 00:30:17.269 Status Code: 0x2 00:30:17.269 Status Code Type: 0x0 00:30:17.269 Do Not Retry: 1 00:30:17.269 Error Location: 0x28 00:30:17.269 LBA: 0x0 00:30:17.269 Namespace: 0x0 00:30:17.269 Vendor Log Page: 0x0 00:30:17.269 00:30:17.269 Number of Queues 00:30:17.269 ================ 00:30:17.269 Number of I/O Submission Queues: 128 00:30:17.269 Number of I/O Completion Queues: 128 00:30:17.269 00:30:17.269 ZNS Specific Controller Data 00:30:17.269 ============================ 00:30:17.269 Zone Append Size Limit: 0 00:30:17.269 00:30:17.269 00:30:17.269 Active Namespaces 00:30:17.269 ================= 00:30:17.269 get_feature(0x05) failed 00:30:17.269 Namespace ID:1 00:30:17.269 Command Set Identifier: NVM (00h) 00:30:17.269 Deallocate: Supported 00:30:17.269 Deallocated/Unwritten Error: Not Supported 00:30:17.270 Deallocated Read Value: Unknown 00:30:17.270 Deallocate in Write Zeroes: Not Supported 00:30:17.270 Deallocated Guard Field: 0xFFFF 00:30:17.270 Flush: Supported 00:30:17.270 Reservation: Not Supported 00:30:17.270 Namespace Sharing Capabilities: Multiple Controllers 00:30:17.270 Size (in LBAs): 3750748848 (1788GiB) 00:30:17.270 Capacity (in LBAs): 3750748848 (1788GiB) 00:30:17.270 Utilization (in LBAs): 3750748848 (1788GiB) 00:30:17.270 UUID: 0eef0c46-8137-4103-a565-698dc3162258 00:30:17.270 Thin Provisioning: Not Supported 00:30:17.270 Per-NS Atomic Units: Yes 00:30:17.270 Atomic Write Unit (Normal): 8 00:30:17.270 Atomic Write Unit (PFail): 8 00:30:17.270 Preferred Write Granularity: 8 00:30:17.270 Atomic Compare & Write Unit: 8 00:30:17.270 Atomic Boundary Size (Normal): 0 00:30:17.270 Atomic Boundary Size (PFail): 0 00:30:17.270 Atomic Boundary Offset: 0 00:30:17.270 NGUID/EUI64 Never Reused: No 00:30:17.270 ANA group ID: 1 00:30:17.270 Namespace Write Protected: No 00:30:17.270 Number of LBA Formats: 1 00:30:17.270 Current LBA Format: LBA Format #00 00:30:17.270 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:17.270 00:30:17.270 13:13:22 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:17.530 13:13:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:17.530 13:13:22 -- nvmf/common.sh@117 -- # sync 00:30:17.530 13:13:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:17.530 13:13:22 -- nvmf/common.sh@120 -- # set +e 00:30:17.530 13:13:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:17.530 13:13:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:17.530 rmmod nvme_tcp 00:30:17.530 rmmod nvme_fabrics 00:30:17.530 13:13:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:17.530 13:13:22 -- nvmf/common.sh@124 -- # set -e 00:30:17.530 13:13:22 -- nvmf/common.sh@125 -- # return 0 00:30:17.530 13:13:22 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:30:17.530 13:13:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:17.530 13:13:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:17.530 13:13:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:17.530 13:13:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.530 13:13:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:17.530 13:13:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.530 13:13:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.530 13:13:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.512 13:13:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:19.512 13:13:24 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:19.512 13:13:24 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:19.512 13:13:24 -- nvmf/common.sh@675 -- # echo 0 00:30:19.512 13:13:24 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:19.512 13:13:24 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:19.512 13:13:24 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:19.512 13:13:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:19.512 13:13:24 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:30:19.512 13:13:24 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:30:19.512 13:13:24 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:23.714 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:23.714 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:23.714 00:30:23.714 real 0m18.690s 00:30:23.714 user 0m5.151s 00:30:23.714 sys 0m10.554s 00:30:23.714 
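The clean_kernel_target step traced above unwinds the configfs-based kernel target that the identify test had set up. A minimal sketch of the same teardown, assuming the subsystem NQN and port number used in this run; the destination of the traced bare "echo 0" is not shown in the log, so the namespace-disable path below is an assumption:

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed target of the traced 'echo 0': disable the namespace first
    rm -f "$cfg/ports/1/subsystems/$nqn"                  # drop the port->subsystem symlink so the port stops exporting it
    rmdir "$cfg/subsystems/$nqn/namespaces/1"             # configfs trees are removed leaf-first
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                           # unload transport and core once the tree is empty

Only after this does scripts/setup.sh rebind the ioatdma/nvme devices back to vfio-pci for the next test, as shown in the trace that follows.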
13:13:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:23.714 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:30:23.714 ************************************ 00:30:23.714 END TEST nvmf_identify_kernel_target 00:30:23.714 ************************************ 00:30:23.714 13:13:28 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:23.714 13:13:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:23.714 13:13:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:23.714 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:30:23.714 ************************************ 00:30:23.714 START TEST nvmf_auth 00:30:23.714 ************************************ 00:30:23.714 13:13:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:23.714 * Looking for test storage... 00:30:23.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.714 13:13:28 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.714 13:13:28 -- nvmf/common.sh@7 -- # uname -s 00:30:23.714 13:13:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.714 13:13:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.714 13:13:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.714 13:13:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.714 13:13:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.714 13:13:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.714 13:13:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.714 13:13:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.714 13:13:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.714 13:13:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.974 13:13:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:23.974 13:13:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:23.974 13:13:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.974 13:13:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.974 13:13:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.975 13:13:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.975 13:13:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.975 13:13:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.975 13:13:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.975 13:13:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.975 13:13:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.975 13:13:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.975 13:13:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.975 13:13:28 -- paths/export.sh@5 -- # export PATH 00:30:23.975 13:13:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.975 13:13:28 -- nvmf/common.sh@47 -- # : 0 00:30:23.975 13:13:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:23.975 13:13:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:23.975 13:13:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.975 13:13:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.975 13:13:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.975 13:13:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:23.975 13:13:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:23.975 13:13:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:23.975 13:13:28 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:23.975 13:13:28 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:23.975 13:13:28 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:30:23.975 13:13:28 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:23.975 13:13:28 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:23.975 13:13:28 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:23.975 13:13:28 -- host/auth.sh@21 -- # keys=() 00:30:23.975 13:13:28 -- host/auth.sh@77 -- # nvmftestinit 00:30:23.975 13:13:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:23.975 13:13:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.975 13:13:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:23.975 13:13:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:23.975 13:13:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:23.975 13:13:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.975 13:13:28 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.975 13:13:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.975 13:13:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:23.975 13:13:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:23.975 13:13:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:23.975 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:30:30.556 13:13:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:30.556 13:13:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:30.556 13:13:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.556 13:13:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.556 13:13:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.556 13:13:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.556 13:13:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.556 13:13:35 -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.556 13:13:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.556 13:13:35 -- nvmf/common.sh@296 -- # e810=() 00:30:30.556 13:13:35 -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.556 13:13:35 -- nvmf/common.sh@297 -- # x722=() 00:30:30.556 13:13:35 -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.556 13:13:35 -- nvmf/common.sh@298 -- # mlx=() 00:30:30.556 13:13:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.556 13:13:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.556 13:13:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.556 13:13:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:30.556 13:13:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.556 13:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.556 13:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:30.556 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:30.556 13:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.556 13:13:35 -- nvmf/common.sh@341 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:30:30.556 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:30.556 13:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.556 13:13:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.556 13:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.556 13:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:30.556 13:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.556 13:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:30.556 Found net devices under 0000:31:00.0: cvl_0_0 00:30:30.556 13:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.556 13:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.556 13:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.556 13:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:30.556 13:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.556 13:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:30.556 Found net devices under 0000:31:00.1: cvl_0_1 00:30:30.556 13:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.556 13:13:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:30.556 13:13:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:30.556 13:13:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:30.556 13:13:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:30.556 13:13:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.556 13:13:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.556 13:13:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.556 13:13:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:30.556 13:13:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.556 13:13:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.556 13:13:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:30.556 13:13:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.556 13:13:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.556 13:13:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:30.556 13:13:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:30.816 13:13:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.816 13:13:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.816 13:13:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.816 13:13:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.816 13:13:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:30.816 13:13:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.076 13:13:35 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.076 13:13:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.076 13:13:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:31.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:30:31.076 00:30:31.076 --- 10.0.0.2 ping statistics --- 00:30:31.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.076 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:30:31.076 13:13:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:31.076 00:30:31.076 --- 10.0.0.1 ping statistics --- 00:30:31.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.076 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:31.076 13:13:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.076 13:13:35 -- nvmf/common.sh@411 -- # return 0 00:30:31.076 13:13:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:31.076 13:13:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.076 13:13:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:31.076 13:13:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:31.076 13:13:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.076 13:13:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:31.076 13:13:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:31.076 13:13:35 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:30:31.076 13:13:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:31.076 13:13:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:31.076 13:13:35 -- common/autotest_common.sh@10 -- # set +x 00:30:31.076 13:13:35 -- nvmf/common.sh@470 -- # nvmfpid=4176968 00:30:31.076 13:13:35 -- nvmf/common.sh@471 -- # waitforlisten 4176968 00:30:31.076 13:13:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:31.076 13:13:35 -- common/autotest_common.sh@817 -- # '[' -z 4176968 ']' 00:30:31.076 13:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.076 13:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:31.076 13:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
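The nvmf_tcp_init trace above splits the two back-to-back E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then launched inside that namespace. A condensed sketch of the same wiring, assuming those interface names and addresses:

    ip netns add cvl_0_0_ns_spdk                                   # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port of the back-to-back pair into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through on the initiator port
    ping -c 1 10.0.0.2                                             # sanity checks in both directions before any NVMe traffic
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1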
00:30:31.076 13:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:31.076 13:13:35 -- common/autotest_common.sh@10 -- # set +x 00:30:32.016 13:13:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:32.016 13:13:36 -- common/autotest_common.sh@850 -- # return 0 00:30:32.016 13:13:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:32.016 13:13:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:32.016 13:13:36 -- common/autotest_common.sh@10 -- # set +x 00:30:32.016 13:13:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.016 13:13:36 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:32.016 13:13:36 -- host/auth.sh@81 -- # gen_key null 32 00:30:32.016 13:13:36 -- host/auth.sh@53 -- # local digest len file key 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # local -A digests 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # digest=null 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # len=32 00:30:32.016 13:13:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:32.016 13:13:36 -- host/auth.sh@57 -- # key=23973962fbbb3bcab216c5efd98addd5 00:30:32.016 13:13:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:30:32.016 13:13:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.h1h 00:30:32.016 13:13:36 -- host/auth.sh@59 -- # format_dhchap_key 23973962fbbb3bcab216c5efd98addd5 0 00:30:32.016 13:13:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 23973962fbbb3bcab216c5efd98addd5 0 00:30:32.016 13:13:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # key=23973962fbbb3bcab216c5efd98addd5 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # digest=0 00:30:32.016 13:13:36 -- nvmf/common.sh@694 -- # python - 00:30:32.016 13:13:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.h1h 00:30:32.016 13:13:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.h1h 00:30:32.016 13:13:36 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.h1h 00:30:32.016 13:13:36 -- host/auth.sh@82 -- # gen_key null 48 00:30:32.016 13:13:36 -- host/auth.sh@53 -- # local digest len file key 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # local -A digests 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # digest=null 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # len=48 00:30:32.016 13:13:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:32.016 13:13:36 -- host/auth.sh@57 -- # key=1bb489a4176e8af709ac4ec00736d3e42943156d91d9ee2e 00:30:32.016 13:13:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:30:32.016 13:13:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Hpy 00:30:32.016 13:13:36 -- host/auth.sh@59 -- # format_dhchap_key 1bb489a4176e8af709ac4ec00736d3e42943156d91d9ee2e 0 00:30:32.016 13:13:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 1bb489a4176e8af709ac4ec00736d3e42943156d91d9ee2e 0 00:30:32.016 13:13:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # key=1bb489a4176e8af709ac4ec00736d3e42943156d91d9ee2e 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # 
digest=0 00:30:32.016 13:13:36 -- nvmf/common.sh@694 -- # python - 00:30:32.016 13:13:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Hpy 00:30:32.016 13:13:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Hpy 00:30:32.016 13:13:36 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.Hpy 00:30:32.016 13:13:36 -- host/auth.sh@83 -- # gen_key sha256 32 00:30:32.016 13:13:36 -- host/auth.sh@53 -- # local digest len file key 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # local -A digests 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # digest=sha256 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # len=32 00:30:32.016 13:13:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:32.016 13:13:36 -- host/auth.sh@57 -- # key=b7888b73a2928fd93f673db5088742a1 00:30:32.016 13:13:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:30:32.016 13:13:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.oAQ 00:30:32.016 13:13:36 -- host/auth.sh@59 -- # format_dhchap_key b7888b73a2928fd93f673db5088742a1 1 00:30:32.016 13:13:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 b7888b73a2928fd93f673db5088742a1 1 00:30:32.016 13:13:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # key=b7888b73a2928fd93f673db5088742a1 00:30:32.016 13:13:36 -- nvmf/common.sh@693 -- # digest=1 00:30:32.016 13:13:36 -- nvmf/common.sh@694 -- # python - 00:30:32.016 13:13:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.oAQ 00:30:32.016 13:13:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.oAQ 00:30:32.016 13:13:36 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.oAQ 00:30:32.016 13:13:36 -- host/auth.sh@84 -- # gen_key sha384 48 00:30:32.016 13:13:36 -- host/auth.sh@53 -- # local digest len file key 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:32.016 13:13:36 -- host/auth.sh@54 -- # local -A digests 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # digest=sha384 00:30:32.016 13:13:36 -- host/auth.sh@56 -- # len=48 00:30:32.016 13:13:37 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:32.016 13:13:37 -- host/auth.sh@57 -- # key=9d257b02c888c11d9e19693f914566e5b75e38a342a51b76 00:30:32.016 13:13:37 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:30:32.016 13:13:37 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.aVR 00:30:32.016 13:13:37 -- host/auth.sh@59 -- # format_dhchap_key 9d257b02c888c11d9e19693f914566e5b75e38a342a51b76 2 00:30:32.016 13:13:37 -- nvmf/common.sh@708 -- # format_key DHHC-1 9d257b02c888c11d9e19693f914566e5b75e38a342a51b76 2 00:30:32.016 13:13:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:32.016 13:13:37 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:32.016 13:13:37 -- nvmf/common.sh@693 -- # key=9d257b02c888c11d9e19693f914566e5b75e38a342a51b76 00:30:32.016 13:13:37 -- nvmf/common.sh@693 -- # digest=2 00:30:32.016 13:13:37 -- nvmf/common.sh@694 -- # python - 00:30:32.016 13:13:37 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.aVR 00:30:32.016 13:13:37 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.aVR 00:30:32.016 13:13:37 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.aVR 00:30:32.016 13:13:37 -- host/auth.sh@85 -- # gen_key sha512 64 00:30:32.016 13:13:37 -- host/auth.sh@53 -- # local digest len file key 00:30:32.016 13:13:37 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:32.016 13:13:37 -- host/auth.sh@54 -- # local -A digests 00:30:32.016 13:13:37 -- host/auth.sh@56 -- # digest=sha512 00:30:32.016 13:13:37 -- host/auth.sh@56 -- # len=64 00:30:32.016 13:13:37 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:32.016 13:13:37 -- host/auth.sh@57 -- # key=007ef1585fcf53198913840f9f92d18d3e6f869d025ac893f6b6f8e39f5235a6 00:30:32.017 13:13:37 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:30:32.017 13:13:37 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.pz4 00:30:32.277 13:13:37 -- host/auth.sh@59 -- # format_dhchap_key 007ef1585fcf53198913840f9f92d18d3e6f869d025ac893f6b6f8e39f5235a6 3 00:30:32.277 13:13:37 -- nvmf/common.sh@708 -- # format_key DHHC-1 007ef1585fcf53198913840f9f92d18d3e6f869d025ac893f6b6f8e39f5235a6 3 00:30:32.277 13:13:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:32.277 13:13:37 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:32.277 13:13:37 -- nvmf/common.sh@693 -- # key=007ef1585fcf53198913840f9f92d18d3e6f869d025ac893f6b6f8e39f5235a6 00:30:32.277 13:13:37 -- nvmf/common.sh@693 -- # digest=3 00:30:32.277 13:13:37 -- nvmf/common.sh@694 -- # python - 00:30:32.277 13:13:37 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.pz4 00:30:32.277 13:13:37 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.pz4 00:30:32.277 13:13:37 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.pz4 00:30:32.277 13:13:37 -- host/auth.sh@87 -- # waitforlisten 4176968 00:30:32.277 13:13:37 -- common/autotest_common.sh@817 -- # '[' -z 4176968 ']' 00:30:32.277 13:13:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.277 13:13:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:32.277 13:13:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
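Each gen_key call traced above draws len/2 random bytes with xxd, wraps them into an SPDK DHHC-1 secret via the format_dhchap_key helper from nvmf/common.sh, and stores the result in a private temp file for later registration. A minimal sketch of one such key, assuming the same helper is available; the exact DHHC-1 encoding is left to the helper rather than reproduced here:

    len=32                                                 # 32 hex characters of key material for this entry
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)         # 16 random bytes, hex-encoded on a single line
    file=$(mktemp -t spdk.key-sha256.XXX)
    format_dhchap_key "$key" 1 > "$file"                   # nvmf/common.sh helper: wraps the hex into a "DHHC-1:01:...:" secret
    chmod 0600 "$file"                                     # keep the secret private before registering it

The trace repeats this for digest ids 0 through 3 (null, sha256, sha384, sha512) with 32-, 48-, and 64-character keys, filling keys[0]..keys[4].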
00:30:32.277 13:13:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:32.277 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.277 13:13:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:32.277 13:13:37 -- common/autotest_common.sh@850 -- # return 0 00:30:32.277 13:13:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:32.277 13:13:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h1h 00:30:32.277 13:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.277 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.277 13:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.277 13:13:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:32.277 13:13:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Hpy 00:30:32.277 13:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.277 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.277 13:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.277 13:13:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:32.277 13:13:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.oAQ 00:30:32.277 13:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.277 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.277 13:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.277 13:13:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:32.277 13:13:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.aVR 00:30:32.277 13:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.277 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.277 13:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.277 13:13:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:32.277 13:13:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pz4 00:30:32.277 13:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.277 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:30:32.537 13:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.537 13:13:37 -- host/auth.sh@92 -- # nvmet_auth_init 00:30:32.537 13:13:37 -- host/auth.sh@35 -- # get_main_ns_ip 00:30:32.537 13:13:37 -- nvmf/common.sh@717 -- # local ip 00:30:32.537 13:13:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:32.537 13:13:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:32.537 13:13:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.537 13:13:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.537 13:13:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:32.537 13:13:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.537 13:13:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:32.537 13:13:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:32.537 13:13:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:32.537 13:13:37 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:32.537 13:13:37 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:32.537 13:13:37 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:32.537 13:13:37 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:32.537 13:13:37 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:32.537 13:13:37 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:32.537 13:13:37 -- nvmf/common.sh@628 -- # local block nvme 00:30:32.537 13:13:37 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:30:32.537 13:13:37 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:32.537 13:13:37 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:32.537 13:13:37 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:35.838 Waiting for block devices as requested 00:30:35.838 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:35.838 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:35.838 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:35.839 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:36.099 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:36.099 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:36.099 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:36.099 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:36.361 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:36.361 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:36.622 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:36.622 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:36.622 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:36.622 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:36.883 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:36.883 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:36.883 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:37.827 13:13:42 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:37.827 13:13:42 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:37.827 13:13:42 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:37.827 13:13:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:37.827 13:13:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:37.827 13:13:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:37.827 13:13:42 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:37.827 13:13:42 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:37.827 13:13:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:37.827 No valid GPT data, bailing 00:30:37.827 13:13:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:37.827 13:13:42 -- scripts/common.sh@391 -- # pt= 00:30:37.827 13:13:42 -- scripts/common.sh@392 -- # return 1 00:30:37.827 13:13:42 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:37.827 13:13:42 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:30:37.827 13:13:42 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:37.827 13:13:42 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:37.827 13:13:42 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:37.827 13:13:42 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:37.827 13:13:42 -- nvmf/common.sh@656 -- # echo 1 00:30:37.827 13:13:42 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:30:37.827 13:13:42 -- nvmf/common.sh@658 -- # echo 1 00:30:37.827 13:13:42 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:37.827 13:13:42 -- nvmf/common.sh@661 -- # echo tcp 00:30:37.827 13:13:42 -- 
nvmf/common.sh@662 -- # echo 4420 00:30:37.827 13:13:42 -- nvmf/common.sh@663 -- # echo ipv4 00:30:37.827 13:13:42 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:37.827 13:13:42 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:30:38.089 00:30:38.089 Discovery Log Number of Records 2, Generation counter 2 00:30:38.089 =====Discovery Log Entry 0====== 00:30:38.089 trtype: tcp 00:30:38.089 adrfam: ipv4 00:30:38.089 subtype: current discovery subsystem 00:30:38.089 treq: not specified, sq flow control disable supported 00:30:38.089 portid: 1 00:30:38.089 trsvcid: 4420 00:30:38.089 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:38.089 traddr: 10.0.0.1 00:30:38.089 eflags: none 00:30:38.089 sectype: none 00:30:38.089 =====Discovery Log Entry 1====== 00:30:38.089 trtype: tcp 00:30:38.089 adrfam: ipv4 00:30:38.089 subtype: nvme subsystem 00:30:38.089 treq: not specified, sq flow control disable supported 00:30:38.089 portid: 1 00:30:38.089 trsvcid: 4420 00:30:38.089 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:38.089 traddr: 10.0.0.1 00:30:38.089 eflags: none 00:30:38.089 sectype: none 00:30:38.089 13:13:42 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:38.089 13:13:42 -- host/auth.sh@37 -- # echo 0 00:30:38.089 13:13:42 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:38.089 13:13:42 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:38.089 13:13:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:38.089 13:13:42 -- host/auth.sh@44 -- # digest=sha256 00:30:38.089 13:13:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.089 13:13:42 -- host/auth.sh@44 -- # keyid=1 00:30:38.089 13:13:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:38.089 13:13:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:38.089 13:13:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:38.089 13:13:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:38.089 13:13:42 -- host/auth.sh@100 -- # IFS=, 00:30:38.089 13:13:42 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:30:38.089 13:13:42 -- host/auth.sh@100 -- # IFS=, 00:30:38.089 13:13:42 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:38.089 13:13:42 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:38.089 13:13:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:38.089 13:13:42 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:30:38.089 13:13:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:38.089 13:13:42 -- host/auth.sh@68 -- # keyid=1 00:30:38.089 13:13:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:38.089 13:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.089 13:13:42 -- common/autotest_common.sh@10 -- # set +x 00:30:38.089 13:13:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.089 13:13:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:38.089 13:13:42 -- nvmf/common.sh@717 -- # local ip 00:30:38.089 13:13:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:38.089 13:13:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:38.089 13:13:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.089 13:13:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.089 13:13:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:38.089 13:13:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.089 13:13:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:38.089 13:13:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:38.089 13:13:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:38.089 13:13:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:38.089 13:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.089 13:13:42 -- common/autotest_common.sh@10 -- # set +x 00:30:38.089 nvme0n1 00:30:38.089 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.089 13:13:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.089 13:13:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:38.089 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.089 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.089 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.089 13:13:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.089 13:13:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.090 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.090 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.354 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.354 13:13:43 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:38.354 13:13:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:38.354 13:13:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:38.354 13:13:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:38.354 13:13:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:38.354 13:13:43 -- host/auth.sh@44 -- # digest=sha256 00:30:38.354 13:13:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.354 13:13:43 -- host/auth.sh@44 -- # keyid=0 00:30:38.354 13:13:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:38.354 13:13:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:38.354 13:13:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:38.354 13:13:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:38.354 13:13:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:30:38.354 13:13:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:38.354 13:13:43 -- host/auth.sh@68 -- # digest=sha256 00:30:38.354 13:13:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:38.354 13:13:43 -- host/auth.sh@68 -- # keyid=0 00:30:38.354 13:13:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:38.354 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.354 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.354 13:13:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.354 13:13:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:38.354 13:13:43 -- nvmf/common.sh@717 -- # local ip 00:30:38.354 13:13:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:38.354 13:13:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:38.354 13:13:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.354 13:13:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.354 13:13:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:38.354 13:13:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.354 13:13:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:38.354 13:13:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:38.354 13:13:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:38.354 13:13:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:38.354 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.354 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.354 nvme0n1 00:30:38.354 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.354 13:13:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.354 13:13:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:38.354 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.354 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.354 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.354 13:13:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.354 13:13:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.354 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.354 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.354 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.354 13:13:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:38.354 13:13:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:38.354 13:13:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:38.354 13:13:43 -- host/auth.sh@44 -- # digest=sha256 00:30:38.354 13:13:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.354 13:13:43 -- host/auth.sh@44 -- # keyid=1 00:30:38.354 13:13:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:38.354 13:13:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:38.354 13:13:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:38.354 13:13:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:38.354 13:13:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:30:38.354 13:13:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:38.354 13:13:43 -- host/auth.sh@68 -- # digest=sha256 00:30:38.354 13:13:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:38.354 13:13:43 -- host/auth.sh@68 -- # keyid=1 00:30:38.354 13:13:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:38.354 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.354 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.354 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.354 13:13:43 -- host/auth.sh@70 -- # get_main_ns_ip 
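connect_authenticate, traced above, exercises one digest/dhgroup/key combination at a time: the generated key files are registered in the SPDK keyring, bdev_nvme is restricted to the combination under test, and a controller is attached to the kernel target with that key. A condensed sketch of the initiator-side RPC sequence, assuming rpc.py stands in for the rpc_cmd wrapper the test uses against the nvmf_tgt started earlier:

    rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.Hpy        # register the generated secret under the name used below
    rpc.py bdev_nvme_set_options \
            --dhchap-digests sha256 \
            --dhchap-dhgroups ffdhe2048                            # restrict negotiation to the combination under test
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1                                      # DH-HMAC-CHAP is attempted with key1 during the connect

On the target side the matching secret was written into the kernel's configfs host entry by nvmet_auth_set_key before the attach, as the earlier echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... trace shows.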
00:30:38.354 13:13:43 -- nvmf/common.sh@717 -- # local ip 00:30:38.354 13:13:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:38.354 13:13:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:38.354 13:13:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.354 13:13:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.354 13:13:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:38.354 13:13:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.354 13:13:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:38.354 13:13:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:38.354 13:13:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:38.615 13:13:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:38.615 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.615 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.615 nvme0n1 00:30:38.615 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.615 13:13:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.615 13:13:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:38.615 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.615 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.615 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.615 13:13:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.615 13:13:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.615 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.615 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.615 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.615 13:13:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:38.615 13:13:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:38.615 13:13:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:38.615 13:13:43 -- host/auth.sh@44 -- # digest=sha256 00:30:38.615 13:13:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.615 13:13:43 -- host/auth.sh@44 -- # keyid=2 00:30:38.615 13:13:43 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:38.615 13:13:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:38.615 13:13:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:38.615 13:13:43 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:38.615 13:13:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:30:38.615 13:13:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:38.615 13:13:43 -- host/auth.sh@68 -- # digest=sha256 00:30:38.615 13:13:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:38.615 13:13:43 -- host/auth.sh@68 -- # keyid=2 00:30:38.615 13:13:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:38.615 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.615 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.615 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.615 13:13:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:38.615 13:13:43 -- nvmf/common.sh@717 -- # local ip 00:30:38.615 13:13:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:38.615 13:13:43 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:30:38.615 13:13:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.615 13:13:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.615 13:13:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:38.615 13:13:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.615 13:13:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:38.615 13:13:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:38.615 13:13:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:38.615 13:13:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:38.615 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.615 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.876 nvme0n1 00:30:38.876 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.876 13:13:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.876 13:13:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:38.876 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.876 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.876 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.876 13:13:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.876 13:13:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.876 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.876 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.876 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.876 13:13:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:38.876 13:13:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:38.876 13:13:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:38.876 13:13:43 -- host/auth.sh@44 -- # digest=sha256 00:30:38.876 13:13:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.876 13:13:43 -- host/auth.sh@44 -- # keyid=3 00:30:38.876 13:13:43 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:38.876 13:13:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:38.876 13:13:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:38.876 13:13:43 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:38.876 13:13:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:30:38.876 13:13:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:38.876 13:13:43 -- host/auth.sh@68 -- # digest=sha256 00:30:38.876 13:13:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:38.876 13:13:43 -- host/auth.sh@68 -- # keyid=3 00:30:38.876 13:13:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:38.876 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.876 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:38.876 13:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:38.876 13:13:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:38.876 13:13:43 -- nvmf/common.sh@717 -- # local ip 00:30:38.876 13:13:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:38.876 13:13:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:38.876 13:13:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
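After each attach the test confirms the controller actually came up before moving on: the namespace nvme0n1 appears, bdev_nvme_get_controllers reports the controller by name, and it is detached so the next digest/dhgroup/key combination starts from a clean slate. A small sketch of that check, using the same jq filter as the trace and again assuming rpc.py in place of rpc_cmd:

    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')   # expect the controller created by the attach above
    [[ $name == nvme0 ]] || exit 1                                 # authentication failed if the controller never materialized
    rpc.py bdev_nvme_detach_controller nvme0                       # tear down before the next digest/dhgroup/key combination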
00:30:38.876 13:13:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.876 13:13:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:38.876 13:13:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.876 13:13:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:38.876 13:13:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:38.876 13:13:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:38.876 13:13:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:38.876 13:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:38.876 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:39.137 nvme0n1 00:30:39.137 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.137 13:13:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.137 13:13:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:39.137 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.137 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.137 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.137 13:13:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.137 13:13:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.137 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.137 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.137 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.137 13:13:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:39.137 13:13:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:39.137 13:13:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:39.137 13:13:44 -- host/auth.sh@44 -- # digest=sha256 00:30:39.137 13:13:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.137 13:13:44 -- host/auth.sh@44 -- # keyid=4 00:30:39.137 13:13:44 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:39.137 13:13:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:39.137 13:13:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:39.137 13:13:44 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:39.137 13:13:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:30:39.137 13:13:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:39.137 13:13:44 -- host/auth.sh@68 -- # digest=sha256 00:30:39.137 13:13:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:39.137 13:13:44 -- host/auth.sh@68 -- # keyid=4 00:30:39.137 13:13:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:39.137 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.137 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.137 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.137 13:13:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:39.137 13:13:44 -- nvmf/common.sh@717 -- # local ip 00:30:39.137 13:13:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:39.137 13:13:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:39.137 13:13:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.137 13:13:44 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.137 13:13:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:39.137 13:13:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.137 13:13:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:39.137 13:13:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:39.137 13:13:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:39.137 13:13:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:39.137 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.137 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.398 nvme0n1 00:30:39.398 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.398 13:13:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.398 13:13:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:39.398 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.398 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.398 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.398 13:13:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.398 13:13:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.398 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.398 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.398 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.398 13:13:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:39.398 13:13:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:39.398 13:13:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:39.398 13:13:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:39.398 13:13:44 -- host/auth.sh@44 -- # digest=sha256 00:30:39.398 13:13:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:39.398 13:13:44 -- host/auth.sh@44 -- # keyid=0 00:30:39.398 13:13:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:39.398 13:13:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:39.398 13:13:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:39.398 13:13:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:39.398 13:13:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:30:39.398 13:13:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:39.398 13:13:44 -- host/auth.sh@68 -- # digest=sha256 00:30:39.398 13:13:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:39.398 13:13:44 -- host/auth.sh@68 -- # keyid=0 00:30:39.398 13:13:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:39.398 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.398 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.398 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.398 13:13:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:39.398 13:13:44 -- nvmf/common.sh@717 -- # local ip 00:30:39.398 13:13:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:39.398 13:13:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:39.398 13:13:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.398 13:13:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.398 13:13:44 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:30:39.398 13:13:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.398 13:13:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:39.398 13:13:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:39.398 13:13:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:39.398 13:13:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:39.398 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.398 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.659 nvme0n1 00:30:39.659 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.659 13:13:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.659 13:13:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:39.659 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.659 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.659 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.659 13:13:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.659 13:13:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.659 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.659 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.659 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.659 13:13:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:39.659 13:13:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:39.659 13:13:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:39.659 13:13:44 -- host/auth.sh@44 -- # digest=sha256 00:30:39.659 13:13:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:39.659 13:13:44 -- host/auth.sh@44 -- # keyid=1 00:30:39.659 13:13:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:39.659 13:13:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:39.659 13:13:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:39.659 13:13:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:39.659 13:13:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:30:39.659 13:13:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:39.659 13:13:44 -- host/auth.sh@68 -- # digest=sha256 00:30:39.659 13:13:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:39.659 13:13:44 -- host/auth.sh@68 -- # keyid=1 00:30:39.659 13:13:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:39.659 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.659 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.659 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.659 13:13:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:39.659 13:13:44 -- nvmf/common.sh@717 -- # local ip 00:30:39.659 13:13:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:39.659 13:13:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:39.659 13:13:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.659 13:13:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.659 13:13:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:39.659 13:13:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.659 13:13:44 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:39.659 13:13:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:39.659 13:13:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:39.659 13:13:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:39.659 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.659 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.920 nvme0n1 00:30:39.920 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.920 13:13:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.920 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.920 13:13:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:39.920 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.920 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.920 13:13:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.920 13:13:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.920 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.920 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.920 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.920 13:13:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:39.920 13:13:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:39.920 13:13:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:39.920 13:13:44 -- host/auth.sh@44 -- # digest=sha256 00:30:39.920 13:13:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:39.920 13:13:44 -- host/auth.sh@44 -- # keyid=2 00:30:39.920 13:13:44 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:39.920 13:13:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:39.920 13:13:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:39.920 13:13:44 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:39.920 13:13:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:30:39.920 13:13:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:39.920 13:13:44 -- host/auth.sh@68 -- # digest=sha256 00:30:39.920 13:13:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:39.920 13:13:44 -- host/auth.sh@68 -- # keyid=2 00:30:39.920 13:13:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:39.920 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.920 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.920 13:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.920 13:13:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:39.920 13:13:44 -- nvmf/common.sh@717 -- # local ip 00:30:39.920 13:13:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:39.920 13:13:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:39.920 13:13:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.920 13:13:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.920 13:13:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:39.920 13:13:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.920 13:13:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:39.920 13:13:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:39.920 13:13:44 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:30:39.920 13:13:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:39.920 13:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.920 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.181 nvme0n1 00:30:40.181 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.181 13:13:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.181 13:13:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:40.181 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.181 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.181 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.181 13:13:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.181 13:13:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.181 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.181 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.181 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.181 13:13:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:40.181 13:13:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:40.181 13:13:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:40.181 13:13:45 -- host/auth.sh@44 -- # digest=sha256 00:30:40.181 13:13:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:40.181 13:13:45 -- host/auth.sh@44 -- # keyid=3 00:30:40.181 13:13:45 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:40.181 13:13:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:40.181 13:13:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:40.181 13:13:45 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:40.181 13:13:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:30:40.181 13:13:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:40.181 13:13:45 -- host/auth.sh@68 -- # digest=sha256 00:30:40.181 13:13:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:40.181 13:13:45 -- host/auth.sh@68 -- # keyid=3 00:30:40.181 13:13:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:40.181 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.181 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.181 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.181 13:13:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:40.181 13:13:45 -- nvmf/common.sh@717 -- # local ip 00:30:40.181 13:13:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:40.181 13:13:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:40.181 13:13:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.181 13:13:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.181 13:13:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:40.181 13:13:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.181 13:13:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:40.181 13:13:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:40.181 13:13:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:40.181 13:13:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:40.181 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.181 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.441 nvme0n1 00:30:40.441 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.441 13:13:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.441 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.441 13:13:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:40.441 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.441 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.441 13:13:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.441 13:13:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.441 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.441 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.441 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.441 13:13:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:40.441 13:13:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:40.441 13:13:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:40.441 13:13:45 -- host/auth.sh@44 -- # digest=sha256 00:30:40.441 13:13:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:40.441 13:13:45 -- host/auth.sh@44 -- # keyid=4 00:30:40.441 13:13:45 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:40.441 13:13:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:40.441 13:13:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:40.441 13:13:45 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:40.441 13:13:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:30:40.441 13:13:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:40.441 13:13:45 -- host/auth.sh@68 -- # digest=sha256 00:30:40.441 13:13:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:40.441 13:13:45 -- host/auth.sh@68 -- # keyid=4 00:30:40.441 13:13:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:40.441 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.441 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.441 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.441 13:13:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:40.441 13:13:45 -- nvmf/common.sh@717 -- # local ip 00:30:40.441 13:13:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:40.441 13:13:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:40.441 13:13:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.441 13:13:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.441 13:13:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:40.441 13:13:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.441 13:13:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:40.441 13:13:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:40.441 13:13:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:40.441 13:13:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:30:40.441 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.441 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.702 nvme0n1 00:30:40.702 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.702 13:13:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.702 13:13:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:40.702 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.702 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.702 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.702 13:13:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.702 13:13:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.702 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.702 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.702 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.702 13:13:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:40.702 13:13:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:40.702 13:13:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:40.702 13:13:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:40.702 13:13:45 -- host/auth.sh@44 -- # digest=sha256 00:30:40.702 13:13:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:40.702 13:13:45 -- host/auth.sh@44 -- # keyid=0 00:30:40.702 13:13:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:40.702 13:13:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:40.702 13:13:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:40.702 13:13:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:40.702 13:13:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:30:40.702 13:13:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:40.702 13:13:45 -- host/auth.sh@68 -- # digest=sha256 00:30:40.702 13:13:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:40.702 13:13:45 -- host/auth.sh@68 -- # keyid=0 00:30:40.702 13:13:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:40.702 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.702 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.702 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.702 13:13:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:40.702 13:13:45 -- nvmf/common.sh@717 -- # local ip 00:30:40.702 13:13:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:40.702 13:13:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:40.702 13:13:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.702 13:13:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.702 13:13:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:40.702 13:13:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.702 13:13:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:40.702 13:13:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:40.702 13:13:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:40.702 13:13:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:40.702 13:13:45 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:30:40.702 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.963 nvme0n1 00:30:40.963 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.963 13:13:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.963 13:13:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:40.963 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.963 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.963 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.963 13:13:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.963 13:13:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.963 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.963 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.963 13:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.963 13:13:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:40.963 13:13:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:40.963 13:13:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:40.963 13:13:45 -- host/auth.sh@44 -- # digest=sha256 00:30:40.963 13:13:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:40.963 13:13:45 -- host/auth.sh@44 -- # keyid=1 00:30:40.963 13:13:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:40.963 13:13:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:40.963 13:13:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:40.963 13:13:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:40.963 13:13:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:30:40.963 13:13:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:40.963 13:13:45 -- host/auth.sh@68 -- # digest=sha256 00:30:40.963 13:13:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:40.963 13:13:45 -- host/auth.sh@68 -- # keyid=1 00:30:40.963 13:13:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:40.963 13:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.963 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:30:40.963 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.963 13:13:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:40.963 13:13:46 -- nvmf/common.sh@717 -- # local ip 00:30:40.963 13:13:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:40.963 13:13:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:40.963 13:13:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.963 13:13:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.963 13:13:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:40.963 13:13:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.963 13:13:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:40.963 13:13:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:40.963 13:13:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:40.963 13:13:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:40.963 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.963 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.223 nvme0n1 00:30:41.223 
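The host/auth.sh@108-@111 markers in the trace correspond to the sweep that drives these cycles: an outer loop over the ffdhe2048 through ffdhe8192 DH groups and an inner loop over key indices 0-4, re-keying the target and reconnecting for each pair. A sketch of that sweep follows, with the array names assumed from the trace rather than taken from the script source; the five DHHC-1: secrets visible in the log differ in their second field, which identifies the optional hash transform applied to the secret.

# Assumed shape of the sweep shown at host/auth.sh@108-@111 (sha256 portion of the run).
digests=(sha256)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
# keys[0..4] are assumed to hold the DHHC-1:.. secrets echoed in the trace.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key  sha256 "$dhgroup" "$keyid"   # target side (host/auth.sh@110)
        connect_authenticate sha256 "$dhgroup" "$keyid"  # initiator side (host/auth.sh@111)
    done
done
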
13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.223 13:13:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.223 13:13:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:41.223 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.223 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.484 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.484 13:13:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.484 13:13:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.484 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.484 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.484 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.484 13:13:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:41.484 13:13:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:41.484 13:13:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:41.484 13:13:46 -- host/auth.sh@44 -- # digest=sha256 00:30:41.484 13:13:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:41.484 13:13:46 -- host/auth.sh@44 -- # keyid=2 00:30:41.484 13:13:46 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:41.484 13:13:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:41.484 13:13:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:41.484 13:13:46 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:41.484 13:13:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:30:41.484 13:13:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:41.484 13:13:46 -- host/auth.sh@68 -- # digest=sha256 00:30:41.484 13:13:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:41.484 13:13:46 -- host/auth.sh@68 -- # keyid=2 00:30:41.484 13:13:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:41.484 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.484 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.484 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.484 13:13:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:41.484 13:13:46 -- nvmf/common.sh@717 -- # local ip 00:30:41.484 13:13:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:41.484 13:13:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:41.484 13:13:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.484 13:13:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.484 13:13:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:41.484 13:13:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.484 13:13:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:41.484 13:13:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:41.484 13:13:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:41.484 13:13:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:41.484 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.484 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.745 nvme0n1 00:30:41.745 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.745 13:13:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.745 13:13:46 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:30:41.745 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.745 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.745 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.745 13:13:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.745 13:13:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.745 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.745 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.745 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.745 13:13:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:41.745 13:13:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:41.745 13:13:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:41.745 13:13:46 -- host/auth.sh@44 -- # digest=sha256 00:30:41.745 13:13:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:41.745 13:13:46 -- host/auth.sh@44 -- # keyid=3 00:30:41.745 13:13:46 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:41.745 13:13:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:41.745 13:13:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:41.745 13:13:46 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:41.745 13:13:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:30:41.745 13:13:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:41.745 13:13:46 -- host/auth.sh@68 -- # digest=sha256 00:30:41.745 13:13:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:41.745 13:13:46 -- host/auth.sh@68 -- # keyid=3 00:30:41.745 13:13:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:41.745 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.745 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:41.745 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.745 13:13:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:41.745 13:13:46 -- nvmf/common.sh@717 -- # local ip 00:30:41.745 13:13:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:41.745 13:13:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:41.745 13:13:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.745 13:13:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.745 13:13:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:41.745 13:13:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.745 13:13:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:41.745 13:13:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:41.745 13:13:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:41.745 13:13:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:41.745 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.745 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:42.005 nvme0n1 00:30:42.005 13:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.005 13:13:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.005 13:13:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:42.005 13:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 
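nvmet_auth_set_key is the target-side half of each iteration: the host/auth.sh@42-@49 trace lines show it echoing the HMAC name, the selected DH group, and the DHHC-1 secret for the chosen key index. A plausible shape for that helper is sketched below; the echo arguments come straight from the trace, but the kernel-nvmet configfs destinations are an assumption (standard per-host dhchap attributes) and are not visible in the log.

# nvmet_auth_set_key <digest> <dhgroup> <keyid> -- hypothetical target-side re-key
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]}                                  # DHHC-1:xx:<base64>: secret
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha256), as echoed at @47
    echo "$dhgroup"        > "${host}/dhchap_dhgroup"   # e.g. ffdhe4096, as echoed at @48
    echo "$key"            > "${host}/dhchap_key"       # secret echoed at @49
}
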
00:30:42.005 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:30:42.005 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.005 13:13:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.005 13:13:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.005 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:42.005 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.005 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.005 13:13:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:42.005 13:13:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:42.005 13:13:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:42.005 13:13:47 -- host/auth.sh@44 -- # digest=sha256 00:30:42.005 13:13:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:42.005 13:13:47 -- host/auth.sh@44 -- # keyid=4 00:30:42.005 13:13:47 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:42.005 13:13:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:42.005 13:13:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:42.005 13:13:47 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:42.005 13:13:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:30:42.005 13:13:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:42.005 13:13:47 -- host/auth.sh@68 -- # digest=sha256 00:30:42.005 13:13:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:42.005 13:13:47 -- host/auth.sh@68 -- # keyid=4 00:30:42.005 13:13:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:42.005 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:42.005 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.265 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.265 13:13:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:42.265 13:13:47 -- nvmf/common.sh@717 -- # local ip 00:30:42.265 13:13:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:42.265 13:13:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:42.265 13:13:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.265 13:13:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.265 13:13:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:42.265 13:13:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.265 13:13:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:42.265 13:13:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:42.265 13:13:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:42.265 13:13:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:42.265 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:42.265 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.525 nvme0n1 00:30:42.525 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.525 13:13:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.525 13:13:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:42.525 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:42.525 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.525 
13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.525 13:13:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.525 13:13:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.525 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:42.525 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.525 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.525 13:13:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:42.525 13:13:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:42.525 13:13:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:42.525 13:13:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:42.525 13:13:47 -- host/auth.sh@44 -- # digest=sha256 00:30:42.525 13:13:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:42.525 13:13:47 -- host/auth.sh@44 -- # keyid=0 00:30:42.525 13:13:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:42.525 13:13:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:42.525 13:13:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:42.525 13:13:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:42.525 13:13:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:30:42.525 13:13:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:42.525 13:13:47 -- host/auth.sh@68 -- # digest=sha256 00:30:42.525 13:13:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:42.525 13:13:47 -- host/auth.sh@68 -- # keyid=0 00:30:42.525 13:13:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:42.525 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:42.525 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:42.525 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:42.525 13:13:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:42.525 13:13:47 -- nvmf/common.sh@717 -- # local ip 00:30:42.525 13:13:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:42.525 13:13:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:42.525 13:13:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.525 13:13:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.525 13:13:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:42.525 13:13:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.525 13:13:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:42.525 13:13:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:42.525 13:13:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:42.525 13:13:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:42.525 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:42.525 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:43.096 nvme0n1 00:30:43.096 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.096 13:13:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.096 13:13:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:43.096 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.096 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:43.096 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.096 13:13:47 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.096 13:13:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.096 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.096 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:43.096 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.096 13:13:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:43.096 13:13:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:43.096 13:13:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:43.096 13:13:47 -- host/auth.sh@44 -- # digest=sha256 00:30:43.096 13:13:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:43.096 13:13:47 -- host/auth.sh@44 -- # keyid=1 00:30:43.096 13:13:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:43.096 13:13:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:43.096 13:13:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:43.096 13:13:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:43.096 13:13:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:30:43.096 13:13:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:43.096 13:13:47 -- host/auth.sh@68 -- # digest=sha256 00:30:43.096 13:13:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:43.096 13:13:47 -- host/auth.sh@68 -- # keyid=1 00:30:43.096 13:13:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:43.096 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.096 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:43.096 13:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.096 13:13:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:43.096 13:13:47 -- nvmf/common.sh@717 -- # local ip 00:30:43.096 13:13:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:43.096 13:13:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:43.096 13:13:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.096 13:13:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.096 13:13:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:43.096 13:13:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.096 13:13:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:43.096 13:13:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:43.096 13:13:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:43.096 13:13:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:43.096 13:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.096 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:30:43.664 nvme0n1 00:30:43.664 13:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.664 13:13:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.664 13:13:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:43.664 13:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.664 13:13:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.664 13:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.664 13:13:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.664 13:13:48 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:43.664 13:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.664 13:13:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.664 13:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.664 13:13:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:43.665 13:13:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:43.665 13:13:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:43.665 13:13:48 -- host/auth.sh@44 -- # digest=sha256 00:30:43.665 13:13:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:43.665 13:13:48 -- host/auth.sh@44 -- # keyid=2 00:30:43.665 13:13:48 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:43.665 13:13:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:43.665 13:13:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:43.665 13:13:48 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:43.665 13:13:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:30:43.665 13:13:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:43.665 13:13:48 -- host/auth.sh@68 -- # digest=sha256 00:30:43.665 13:13:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:43.665 13:13:48 -- host/auth.sh@68 -- # keyid=2 00:30:43.665 13:13:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:43.665 13:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.665 13:13:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.665 13:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.665 13:13:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:43.665 13:13:48 -- nvmf/common.sh@717 -- # local ip 00:30:43.665 13:13:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:43.665 13:13:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:43.665 13:13:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.665 13:13:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.665 13:13:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:43.665 13:13:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.665 13:13:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:43.665 13:13:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:43.665 13:13:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:43.665 13:13:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:43.665 13:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.665 13:13:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.924 nvme0n1 00:30:43.924 13:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:43.924 13:13:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.924 13:13:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:43.924 13:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:43.924 13:13:48 -- common/autotest_common.sh@10 -- # set +x 00:30:43.924 13:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.184 13:13:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.184 13:13:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.184 13:13:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.184 13:13:49 -- common/autotest_common.sh@10 -- # 
set +x 00:30:44.184 13:13:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.184 13:13:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:44.184 13:13:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:44.184 13:13:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:44.184 13:13:49 -- host/auth.sh@44 -- # digest=sha256 00:30:44.184 13:13:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:44.184 13:13:49 -- host/auth.sh@44 -- # keyid=3 00:30:44.184 13:13:49 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:44.184 13:13:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:44.184 13:13:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:44.184 13:13:49 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:44.184 13:13:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:30:44.184 13:13:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:44.184 13:13:49 -- host/auth.sh@68 -- # digest=sha256 00:30:44.184 13:13:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:44.184 13:13:49 -- host/auth.sh@68 -- # keyid=3 00:30:44.184 13:13:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:44.184 13:13:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.184 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.184 13:13:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.184 13:13:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:44.184 13:13:49 -- nvmf/common.sh@717 -- # local ip 00:30:44.184 13:13:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:44.184 13:13:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:44.184 13:13:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.184 13:13:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.184 13:13:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:44.184 13:13:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.184 13:13:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:44.184 13:13:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:44.184 13:13:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:44.184 13:13:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:44.184 13:13:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.184 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.754 nvme0n1 00:30:44.754 13:13:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.754 13:13:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.754 13:13:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.754 13:13:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:44.754 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.754 13:13:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.754 13:13:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.754 13:13:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.754 13:13:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.754 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.754 13:13:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.754 13:13:49 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:44.754 13:13:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:44.754 13:13:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:44.754 13:13:49 -- host/auth.sh@44 -- # digest=sha256 00:30:44.754 13:13:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:44.754 13:13:49 -- host/auth.sh@44 -- # keyid=4 00:30:44.754 13:13:49 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:44.754 13:13:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:44.754 13:13:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:44.754 13:13:49 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:44.754 13:13:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:30:44.754 13:13:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:44.754 13:13:49 -- host/auth.sh@68 -- # digest=sha256 00:30:44.754 13:13:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:44.754 13:13:49 -- host/auth.sh@68 -- # keyid=4 00:30:44.754 13:13:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:44.754 13:13:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.754 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:30:44.754 13:13:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.754 13:13:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:44.754 13:13:49 -- nvmf/common.sh@717 -- # local ip 00:30:44.754 13:13:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:44.754 13:13:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:44.754 13:13:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.754 13:13:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.754 13:13:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:44.754 13:13:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.754 13:13:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:44.754 13:13:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:44.754 13:13:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:44.754 13:13:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:44.754 13:13:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.754 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:30:45.014 nvme0n1 00:30:45.014 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:45.014 13:13:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.014 13:13:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:45.014 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:45.014 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.014 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:45.274 13:13:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.274 13:13:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.274 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:45.274 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.274 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:45.274 13:13:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:45.274 13:13:50 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:45.274 13:13:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:45.274 13:13:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:45.274 13:13:50 -- host/auth.sh@44 -- # digest=sha256 00:30:45.274 13:13:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:45.274 13:13:50 -- host/auth.sh@44 -- # keyid=0 00:30:45.274 13:13:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:45.274 13:13:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:45.274 13:13:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:45.274 13:13:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:45.274 13:13:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:30:45.274 13:13:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:45.274 13:13:50 -- host/auth.sh@68 -- # digest=sha256 00:30:45.274 13:13:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:45.274 13:13:50 -- host/auth.sh@68 -- # keyid=0 00:30:45.274 13:13:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:45.274 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:45.274 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.274 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:45.274 13:13:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:45.274 13:13:50 -- nvmf/common.sh@717 -- # local ip 00:30:45.274 13:13:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:45.274 13:13:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:45.274 13:13:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.274 13:13:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.274 13:13:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:45.274 13:13:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.274 13:13:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:45.274 13:13:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:45.274 13:13:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:45.274 13:13:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:45.274 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:45.274 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.843 nvme0n1 00:30:45.843 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:45.843 13:13:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.843 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:45.843 13:13:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:45.843 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:45.843 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.103 13:13:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.103 13:13:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.103 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:46.103 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:46.103 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.103 13:13:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:46.103 13:13:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:46.103 13:13:50 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:46.103 13:13:50 -- host/auth.sh@44 -- # digest=sha256 00:30:46.103 13:13:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:46.103 13:13:50 -- host/auth.sh@44 -- # keyid=1 00:30:46.103 13:13:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:46.103 13:13:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:46.103 13:13:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:46.103 13:13:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:46.103 13:13:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:30:46.103 13:13:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:46.103 13:13:50 -- host/auth.sh@68 -- # digest=sha256 00:30:46.103 13:13:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:46.103 13:13:50 -- host/auth.sh@68 -- # keyid=1 00:30:46.103 13:13:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:46.103 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:46.103 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:46.103 13:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.103 13:13:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:46.103 13:13:50 -- nvmf/common.sh@717 -- # local ip 00:30:46.103 13:13:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:46.103 13:13:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:46.103 13:13:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.103 13:13:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.103 13:13:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:46.103 13:13:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.103 13:13:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:46.103 13:13:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:46.103 13:13:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:46.103 13:13:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:46.103 13:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:46.103 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:30:46.673 nvme0n1 00:30:46.673 13:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.673 13:13:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.673 13:13:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:46.673 13:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:46.673 13:13:51 -- common/autotest_common.sh@10 -- # set +x 00:30:46.673 13:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.934 13:13:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.934 13:13:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.934 13:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:46.934 13:13:51 -- common/autotest_common.sh@10 -- # set +x 00:30:46.934 13:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.934 13:13:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:46.934 13:13:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:46.934 13:13:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:46.934 13:13:51 -- host/auth.sh@44 -- # digest=sha256 
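
Note: each iteration in this trace follows the same shape. nvmet_auth_set_key programs one digest/DH-group/key combination into the kernel nvmet target, and connect_authenticate then restricts the SPDK initiator to that same combination and attaches a controller, so the connect only succeeds if DH-HMAC-CHAP authentication succeeds. A condensed sketch of the host-side half, reconstructed from the rpc_cmd calls visible above (rpc_cmd is the test suite's rpc.py wrapper; the wrapper function name below is illustrative, and "key${keyid}" refers to a secret registered earlier in the test, outside this excerpt):

    connect_with_dhchap() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Limit the initiator to a single digest / DH-group pair.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach the controller; DH-HMAC-CHAP runs as part of the connect.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
        # The controller only shows up if authentication passed; then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    connect_with_dhchap sha256 ffdhe8192 1   # one of the combinations exercised in this part of the trace
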
00:30:46.934 13:13:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:46.934 13:13:51 -- host/auth.sh@44 -- # keyid=2 00:30:46.934 13:13:51 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:46.934 13:13:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:46.934 13:13:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:46.934 13:13:51 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:46.934 13:13:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:30:46.934 13:13:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:46.934 13:13:51 -- host/auth.sh@68 -- # digest=sha256 00:30:46.934 13:13:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:46.934 13:13:51 -- host/auth.sh@68 -- # keyid=2 00:30:46.934 13:13:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:46.934 13:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:46.934 13:13:51 -- common/autotest_common.sh@10 -- # set +x 00:30:46.934 13:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.934 13:13:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:46.934 13:13:51 -- nvmf/common.sh@717 -- # local ip 00:30:46.934 13:13:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:46.934 13:13:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:46.934 13:13:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.934 13:13:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.934 13:13:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:46.934 13:13:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.934 13:13:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:46.934 13:13:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:46.934 13:13:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:46.934 13:13:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:46.934 13:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:46.934 13:13:51 -- common/autotest_common.sh@10 -- # set +x 00:30:47.503 nvme0n1 00:30:47.503 13:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.503 13:13:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.503 13:13:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:47.503 13:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.503 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:30:47.503 13:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.763 13:13:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.763 13:13:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.763 13:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.763 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:30:47.763 13:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.763 13:13:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:47.763 13:13:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:47.763 13:13:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:47.763 13:13:52 -- host/auth.sh@44 -- # digest=sha256 00:30:47.763 13:13:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:47.763 13:13:52 -- host/auth.sh@44 -- # keyid=3 00:30:47.763 13:13:52 -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:47.763 13:13:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:47.763 13:13:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:47.763 13:13:52 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:47.763 13:13:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:30:47.763 13:13:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:47.763 13:13:52 -- host/auth.sh@68 -- # digest=sha256 00:30:47.763 13:13:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:47.763 13:13:52 -- host/auth.sh@68 -- # keyid=3 00:30:47.763 13:13:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:47.763 13:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.763 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:30:47.763 13:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.763 13:13:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:47.763 13:13:52 -- nvmf/common.sh@717 -- # local ip 00:30:47.763 13:13:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:47.763 13:13:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:47.763 13:13:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.763 13:13:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.763 13:13:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:47.763 13:13:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.763 13:13:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:47.763 13:13:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:47.763 13:13:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:47.763 13:13:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:47.763 13:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.763 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:30:48.333 nvme0n1 00:30:48.333 13:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.333 13:13:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.333 13:13:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:48.333 13:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.333 13:13:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.333 13:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.593 13:13:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.593 13:13:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.593 13:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.593 13:13:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.593 13:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.593 13:13:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:48.593 13:13:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:48.593 13:13:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:48.593 13:13:53 -- host/auth.sh@44 -- # digest=sha256 00:30:48.593 13:13:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:48.593 13:13:53 -- host/auth.sh@44 -- # keyid=4 00:30:48.593 13:13:53 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:48.593 
13:13:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:48.593 13:13:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:48.593 13:13:53 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:48.593 13:13:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:30:48.593 13:13:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:48.593 13:13:53 -- host/auth.sh@68 -- # digest=sha256 00:30:48.593 13:13:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:48.593 13:13:53 -- host/auth.sh@68 -- # keyid=4 00:30:48.593 13:13:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:48.593 13:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.593 13:13:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.593 13:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.593 13:13:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:48.593 13:13:53 -- nvmf/common.sh@717 -- # local ip 00:30:48.593 13:13:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:48.593 13:13:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:48.593 13:13:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.593 13:13:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.593 13:13:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:48.593 13:13:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.593 13:13:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:48.593 13:13:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:48.593 13:13:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:48.593 13:13:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:48.593 13:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.593 13:13:53 -- common/autotest_common.sh@10 -- # set +x 00:30:49.165 nvme0n1 00:30:49.165 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.165 13:13:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.165 13:13:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:49.165 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.165 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.165 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.426 13:13:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.426 13:13:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.426 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.426 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.426 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.426 13:13:54 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:49.426 13:13:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:49.426 13:13:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:49.426 13:13:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:49.426 13:13:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:49.426 13:13:54 -- host/auth.sh@44 -- # digest=sha384 00:30:49.426 13:13:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:49.426 13:13:54 -- host/auth.sh@44 -- # keyid=0 00:30:49.426 13:13:54 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:49.426 13:13:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:49.426 13:13:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:49.426 13:13:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:49.426 13:13:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:30:49.426 13:13:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:49.426 13:13:54 -- host/auth.sh@68 -- # digest=sha384 00:30:49.426 13:13:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:49.426 13:13:54 -- host/auth.sh@68 -- # keyid=0 00:30:49.426 13:13:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:49.426 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.426 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.426 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.426 13:13:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:49.426 13:13:54 -- nvmf/common.sh@717 -- # local ip 00:30:49.426 13:13:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:49.426 13:13:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:49.426 13:13:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.426 13:13:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.427 13:13:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:49.427 13:13:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.427 13:13:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:49.427 13:13:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:49.427 13:13:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:49.427 13:13:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:49.427 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.427 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.427 nvme0n1 00:30:49.427 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.427 13:13:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.427 13:13:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:49.427 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.427 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.427 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.427 13:13:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.427 13:13:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.427 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.427 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.427 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.427 13:13:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:49.427 13:13:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:49.427 13:13:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:49.427 13:13:54 -- host/auth.sh@44 -- # digest=sha384 00:30:49.427 13:13:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:49.427 13:13:54 -- host/auth.sh@44 -- # keyid=1 00:30:49.427 13:13:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:49.427 13:13:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:49.427 
13:13:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:49.427 13:13:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:49.427 13:13:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:30:49.427 13:13:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:49.427 13:13:54 -- host/auth.sh@68 -- # digest=sha384 00:30:49.427 13:13:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:49.427 13:13:54 -- host/auth.sh@68 -- # keyid=1 00:30:49.427 13:13:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:49.427 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.427 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.688 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.688 13:13:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:49.688 13:13:54 -- nvmf/common.sh@717 -- # local ip 00:30:49.688 13:13:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:49.688 13:13:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:49.688 13:13:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.688 13:13:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.688 13:13:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:49.688 13:13:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.688 13:13:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:49.688 13:13:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:49.688 13:13:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:49.688 13:13:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:49.688 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.688 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.688 nvme0n1 00:30:49.688 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.688 13:13:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.688 13:13:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:49.688 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.688 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.688 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.688 13:13:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.688 13:13:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.688 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.688 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.688 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.688 13:13:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:49.688 13:13:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:49.688 13:13:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:49.688 13:13:54 -- host/auth.sh@44 -- # digest=sha384 00:30:49.688 13:13:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:49.688 13:13:54 -- host/auth.sh@44 -- # keyid=2 00:30:49.688 13:13:54 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:49.688 13:13:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:49.688 13:13:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:49.688 13:13:54 -- host/auth.sh@49 -- # echo 
DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:49.688 13:13:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:30:49.688 13:13:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:49.688 13:13:54 -- host/auth.sh@68 -- # digest=sha384 00:30:49.688 13:13:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:49.688 13:13:54 -- host/auth.sh@68 -- # keyid=2 00:30:49.688 13:13:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:49.688 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.688 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.688 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.688 13:13:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:49.688 13:13:54 -- nvmf/common.sh@717 -- # local ip 00:30:49.688 13:13:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:49.688 13:13:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:49.688 13:13:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.688 13:13:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.688 13:13:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:49.688 13:13:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.688 13:13:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:49.688 13:13:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:49.688 13:13:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:49.688 13:13:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:49.688 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.688 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.949 nvme0n1 00:30:49.949 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.949 13:13:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.949 13:13:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:49.949 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.949 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.949 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.949 13:13:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.949 13:13:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.949 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.949 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.949 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.949 13:13:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:49.949 13:13:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:49.949 13:13:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:49.949 13:13:54 -- host/auth.sh@44 -- # digest=sha384 00:30:49.949 13:13:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:49.949 13:13:54 -- host/auth.sh@44 -- # keyid=3 00:30:49.949 13:13:54 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:49.949 13:13:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:49.949 13:13:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:49.949 13:13:54 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:49.949 13:13:54 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:30:49.949 13:13:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:49.949 13:13:54 -- host/auth.sh@68 -- # digest=sha384 00:30:49.949 13:13:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:49.949 13:13:54 -- host/auth.sh@68 -- # keyid=3 00:30:49.949 13:13:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:49.949 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.949 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:49.949 13:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:49.949 13:13:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:49.949 13:13:54 -- nvmf/common.sh@717 -- # local ip 00:30:49.949 13:13:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:49.949 13:13:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:49.949 13:13:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.949 13:13:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.949 13:13:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:49.949 13:13:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.949 13:13:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:49.949 13:13:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:49.949 13:13:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:49.949 13:13:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:49.949 13:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:49.949 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:50.261 nvme0n1 00:30:50.261 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.261 13:13:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.261 13:13:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:50.261 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.261 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.261 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.261 13:13:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.261 13:13:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.261 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.261 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.261 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.261 13:13:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:50.261 13:13:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:50.261 13:13:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:50.261 13:13:55 -- host/auth.sh@44 -- # digest=sha384 00:30:50.261 13:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:50.261 13:13:55 -- host/auth.sh@44 -- # keyid=4 00:30:50.261 13:13:55 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:50.261 13:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:50.261 13:13:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:50.261 13:13:55 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:50.261 13:13:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:30:50.261 13:13:55 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:30:50.261 13:13:55 -- host/auth.sh@68 -- # digest=sha384 00:30:50.261 13:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:50.261 13:13:55 -- host/auth.sh@68 -- # keyid=4 00:30:50.261 13:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:50.261 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.261 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.261 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.261 13:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:50.261 13:13:55 -- nvmf/common.sh@717 -- # local ip 00:30:50.261 13:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:50.261 13:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:50.261 13:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.261 13:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.261 13:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:50.261 13:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.261 13:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:50.261 13:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:50.261 13:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:50.261 13:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:50.261 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.261 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.522 nvme0n1 00:30:50.522 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.522 13:13:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.522 13:13:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:50.522 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.522 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.522 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.522 13:13:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.522 13:13:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.522 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.522 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.522 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.522 13:13:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:50.522 13:13:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:50.522 13:13:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:50.522 13:13:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:50.522 13:13:55 -- host/auth.sh@44 -- # digest=sha384 00:30:50.522 13:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:50.522 13:13:55 -- host/auth.sh@44 -- # keyid=0 00:30:50.522 13:13:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:50.522 13:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:50.523 13:13:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:50.523 13:13:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:50.523 13:13:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:30:50.523 13:13:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:50.523 13:13:55 -- host/auth.sh@68 -- # 
digest=sha384 00:30:50.523 13:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:50.523 13:13:55 -- host/auth.sh@68 -- # keyid=0 00:30:50.523 13:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:50.523 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.523 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.523 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.523 13:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:50.523 13:13:55 -- nvmf/common.sh@717 -- # local ip 00:30:50.523 13:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:50.523 13:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:50.523 13:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.523 13:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.523 13:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:50.523 13:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.523 13:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:50.523 13:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:50.523 13:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:50.523 13:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:50.523 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.523 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.784 nvme0n1 00:30:50.784 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.784 13:13:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.784 13:13:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:50.784 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.784 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.784 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.784 13:13:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.784 13:13:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.784 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.784 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.784 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.784 13:13:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:50.784 13:13:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:50.784 13:13:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:50.784 13:13:55 -- host/auth.sh@44 -- # digest=sha384 00:30:50.784 13:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:50.784 13:13:55 -- host/auth.sh@44 -- # keyid=1 00:30:50.784 13:13:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:50.784 13:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:50.784 13:13:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:50.784 13:13:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:50.784 13:13:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:30:50.784 13:13:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:50.784 13:13:55 -- host/auth.sh@68 -- # digest=sha384 00:30:50.784 13:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:50.784 13:13:55 -- host/auth.sh@68 
-- # keyid=1 00:30:50.784 13:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:50.784 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.784 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:50.784 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.784 13:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:50.784 13:13:55 -- nvmf/common.sh@717 -- # local ip 00:30:50.784 13:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:50.784 13:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:50.784 13:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.784 13:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.784 13:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:50.784 13:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.784 13:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:50.784 13:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:50.784 13:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:50.784 13:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:50.784 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.784 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:51.045 nvme0n1 00:30:51.045 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.045 13:13:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.045 13:13:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:51.045 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.045 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:51.045 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.045 13:13:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.045 13:13:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.045 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.045 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:51.045 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.045 13:13:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:51.045 13:13:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:51.045 13:13:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:51.045 13:13:55 -- host/auth.sh@44 -- # digest=sha384 00:30:51.045 13:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:51.045 13:13:55 -- host/auth.sh@44 -- # keyid=2 00:30:51.045 13:13:55 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:51.045 13:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:51.045 13:13:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:51.045 13:13:55 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:51.045 13:13:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:30:51.045 13:13:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:51.045 13:13:55 -- host/auth.sh@68 -- # digest=sha384 00:30:51.045 13:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:51.045 13:13:55 -- host/auth.sh@68 -- # keyid=2 00:30:51.045 13:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:51.045 13:13:55 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.045 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:51.045 13:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.045 13:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:51.045 13:13:55 -- nvmf/common.sh@717 -- # local ip 00:30:51.045 13:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:51.045 13:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:51.045 13:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.045 13:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.045 13:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:51.045 13:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.045 13:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:51.045 13:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:51.045 13:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:51.045 13:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:51.045 13:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.045 13:13:55 -- common/autotest_common.sh@10 -- # set +x 00:30:51.307 nvme0n1 00:30:51.307 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.307 13:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.307 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.307 13:13:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:51.307 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.307 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.307 13:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.307 13:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.307 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.307 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.307 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.307 13:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:51.307 13:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:51.307 13:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:51.307 13:13:56 -- host/auth.sh@44 -- # digest=sha384 00:30:51.307 13:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:51.307 13:13:56 -- host/auth.sh@44 -- # keyid=3 00:30:51.307 13:13:56 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:51.307 13:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:51.307 13:13:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:51.307 13:13:56 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:51.307 13:13:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:30:51.307 13:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:51.307 13:13:56 -- host/auth.sh@68 -- # digest=sha384 00:30:51.307 13:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:51.307 13:13:56 -- host/auth.sh@68 -- # keyid=3 00:30:51.307 13:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:51.307 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.307 13:13:56 -- common/autotest_common.sh@10 -- # set +x 
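
Note on the key strings: the DHHC-1:NN:...: values echoed at host/auth.sh@45 and @49 are DH-HMAC-CHAP secrets in the standard NVMe textual form. The middle field selects how the secret is transformed before use (00 = used as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the last field is the base64-encoded secret with a CRC-32 check value appended. The five keys cycled here (key0 through key4) span transform fields 00 through 03, so every transform is exercised against every digest/DH-group pair. A small, purely illustrative helper (not part of auth.sh) that classifies a key by its transform field:

    describe_dhchap_key() {
        # Expects "DHHC-1:<tt>:<base64 secret + CRC-32>:"
        local tt=${1#DHHC-1:}
        tt=${tt%%:*}
        case $tt in
            00) echo "secret used as-is (no transform)" ;;
            01) echo "secret transformed with SHA-256" ;;
            02) echo "secret transformed with SHA-384" ;;
            03) echo "secret transformed with SHA-512" ;;
            *)  echo "unknown transform '$tt'" >&2; return 1 ;;
        esac
    }

    describe_dhchap_key 'DHHC-1:03:MDA3ZWYx...Nkfxi6g=:'   # -> secret transformed with SHA-512
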
00:30:51.307 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.307 13:13:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:51.307 13:13:56 -- nvmf/common.sh@717 -- # local ip 00:30:51.307 13:13:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:51.307 13:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:51.307 13:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.307 13:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.307 13:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:51.307 13:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.307 13:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:51.307 13:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:51.307 13:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:51.307 13:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:51.307 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.307 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.568 nvme0n1 00:30:51.568 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.568 13:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.568 13:13:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:51.568 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.568 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.568 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.568 13:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.568 13:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.568 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.568 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.568 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.568 13:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:51.568 13:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:51.568 13:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:51.568 13:13:56 -- host/auth.sh@44 -- # digest=sha384 00:30:51.568 13:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:51.568 13:13:56 -- host/auth.sh@44 -- # keyid=4 00:30:51.568 13:13:56 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:51.568 13:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:51.568 13:13:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:51.568 13:13:56 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:51.568 13:13:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:30:51.568 13:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:51.568 13:13:56 -- host/auth.sh@68 -- # digest=sha384 00:30:51.568 13:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:51.568 13:13:56 -- host/auth.sh@68 -- # keyid=4 00:30:51.568 13:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:51.568 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.568 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.568 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
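
The repeated nvmf/common.sh@717–@731 block is the helper that resolves which address the initiator should dial: it keeps an associative array mapping the transport to the name of the environment variable that holds the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and dereferences the matching one, which is why every attach in this run goes to 10.0.0.1. An approximate reconstruction from the traced lines (only the variable names that appear in the trace are certain; the transport variable name is assumed):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # Bail out if the transport is unset or has no mapping.
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion: ${!ip} reads the variable whose name is stored in ip.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # 10.0.0.1 in this run (tcp -> NVMF_INITIATOR_IP)
    }
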
00:30:51.568 13:13:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:51.568 13:13:56 -- nvmf/common.sh@717 -- # local ip 00:30:51.568 13:13:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:51.568 13:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:51.568 13:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.568 13:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.568 13:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:51.568 13:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.568 13:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:51.568 13:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:51.568 13:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:51.568 13:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:51.568 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.568 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.829 nvme0n1 00:30:51.829 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.829 13:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.829 13:13:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:51.829 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.829 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.829 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.829 13:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.829 13:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.829 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.829 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.829 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.829 13:13:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:51.829 13:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:51.829 13:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:51.829 13:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:51.829 13:13:56 -- host/auth.sh@44 -- # digest=sha384 00:30:51.829 13:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:51.829 13:13:56 -- host/auth.sh@44 -- # keyid=0 00:30:51.829 13:13:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:51.829 13:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:51.829 13:13:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:51.829 13:13:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:51.829 13:13:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:30:51.829 13:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:51.829 13:13:56 -- host/auth.sh@68 -- # digest=sha384 00:30:51.829 13:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:51.829 13:13:56 -- host/auth.sh@68 -- # keyid=0 00:30:51.829 13:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:51.829 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.829 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:51.829 13:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.829 13:13:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:51.829 13:13:56 -- 
nvmf/common.sh@717 -- # local ip 00:30:51.829 13:13:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:51.829 13:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:51.829 13:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.829 13:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.829 13:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:51.829 13:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.829 13:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:51.829 13:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:51.829 13:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:51.829 13:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:51.829 13:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.829 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:30:52.090 nvme0n1 00:30:52.090 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.090 13:13:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.090 13:13:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:52.090 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.090 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.090 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.090 13:13:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.090 13:13:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.090 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.090 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.090 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.090 13:13:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:52.090 13:13:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:52.090 13:13:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:52.090 13:13:57 -- host/auth.sh@44 -- # digest=sha384 00:30:52.090 13:13:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:52.090 13:13:57 -- host/auth.sh@44 -- # keyid=1 00:30:52.090 13:13:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:52.090 13:13:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:52.090 13:13:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:52.090 13:13:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:52.090 13:13:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:30:52.090 13:13:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:52.090 13:13:57 -- host/auth.sh@68 -- # digest=sha384 00:30:52.090 13:13:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:52.090 13:13:57 -- host/auth.sh@68 -- # keyid=1 00:30:52.090 13:13:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:52.090 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.090 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.090 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.090 13:13:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:52.090 13:13:57 -- nvmf/common.sh@717 -- # local ip 00:30:52.090 13:13:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:52.090 13:13:57 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:52.090 13:13:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.090 13:13:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.090 13:13:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:52.090 13:13:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.090 13:13:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:52.090 13:13:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:52.090 13:13:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:52.090 13:13:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:52.090 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.090 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.351 nvme0n1 00:30:52.351 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.351 13:13:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.351 13:13:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:52.351 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.351 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.351 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.351 13:13:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.351 13:13:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.351 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.351 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.351 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.351 13:13:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:52.351 13:13:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:52.351 13:13:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:52.351 13:13:57 -- host/auth.sh@44 -- # digest=sha384 00:30:52.351 13:13:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:52.351 13:13:57 -- host/auth.sh@44 -- # keyid=2 00:30:52.351 13:13:57 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:52.351 13:13:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:52.351 13:13:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:52.351 13:13:57 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:52.351 13:13:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:30:52.351 13:13:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:52.351 13:13:57 -- host/auth.sh@68 -- # digest=sha384 00:30:52.351 13:13:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:52.351 13:13:57 -- host/auth.sh@68 -- # keyid=2 00:30:52.351 13:13:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:52.351 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.351 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.351 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.351 13:13:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:52.351 13:13:57 -- nvmf/common.sh@717 -- # local ip 00:30:52.351 13:13:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:52.351 13:13:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:52.351 13:13:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.351 13:13:57 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.351 13:13:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:52.351 13:13:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.351 13:13:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:52.351 13:13:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:52.351 13:13:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:52.351 13:13:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:52.612 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.612 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.874 nvme0n1 00:30:52.874 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.874 13:13:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.874 13:13:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:52.874 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.874 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.874 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.874 13:13:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.874 13:13:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.874 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.874 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.874 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.874 13:13:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:52.874 13:13:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:30:52.874 13:13:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:52.874 13:13:57 -- host/auth.sh@44 -- # digest=sha384 00:30:52.874 13:13:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:52.874 13:13:57 -- host/auth.sh@44 -- # keyid=3 00:30:52.874 13:13:57 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:52.874 13:13:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:52.874 13:13:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:52.874 13:13:57 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:52.874 13:13:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:30:52.874 13:13:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:52.874 13:13:57 -- host/auth.sh@68 -- # digest=sha384 00:30:52.874 13:13:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:52.874 13:13:57 -- host/auth.sh@68 -- # keyid=3 00:30:52.874 13:13:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:52.874 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.874 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:52.874 13:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:52.874 13:13:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:52.874 13:13:57 -- nvmf/common.sh@717 -- # local ip 00:30:52.874 13:13:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:52.874 13:13:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:52.874 13:13:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.874 13:13:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.874 13:13:57 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:30:52.874 13:13:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.874 13:13:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:52.874 13:13:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:52.874 13:13:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:52.874 13:13:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:52.874 13:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:52.874 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:30:53.135 nvme0n1 00:30:53.135 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.135 13:13:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.135 13:13:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:53.135 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.135 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.135 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.135 13:13:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.135 13:13:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.135 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.135 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.135 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.135 13:13:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:53.135 13:13:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:53.135 13:13:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:53.135 13:13:58 -- host/auth.sh@44 -- # digest=sha384 00:30:53.135 13:13:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:53.135 13:13:58 -- host/auth.sh@44 -- # keyid=4 00:30:53.135 13:13:58 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:53.135 13:13:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:53.135 13:13:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:53.135 13:13:58 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:53.135 13:13:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:30:53.135 13:13:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:53.135 13:13:58 -- host/auth.sh@68 -- # digest=sha384 00:30:53.135 13:13:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:53.135 13:13:58 -- host/auth.sh@68 -- # keyid=4 00:30:53.135 13:13:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:53.135 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.135 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.135 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.135 13:13:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:53.135 13:13:58 -- nvmf/common.sh@717 -- # local ip 00:30:53.135 13:13:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:53.135 13:13:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:53.135 13:13:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.135 13:13:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.135 13:13:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:53.135 13:13:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:30:53.135 13:13:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:53.135 13:13:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:53.135 13:13:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:53.135 13:13:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:53.135 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.135 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.396 nvme0n1 00:30:53.396 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.396 13:13:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.396 13:13:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:53.396 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.396 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.396 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.396 13:13:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.396 13:13:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.396 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.396 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.396 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.396 13:13:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:53.396 13:13:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:53.396 13:13:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:53.396 13:13:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:53.396 13:13:58 -- host/auth.sh@44 -- # digest=sha384 00:30:53.396 13:13:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:53.396 13:13:58 -- host/auth.sh@44 -- # keyid=0 00:30:53.396 13:13:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:53.396 13:13:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:53.396 13:13:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:53.396 13:13:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:53.396 13:13:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:30:53.396 13:13:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:53.396 13:13:58 -- host/auth.sh@68 -- # digest=sha384 00:30:53.396 13:13:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:53.396 13:13:58 -- host/auth.sh@68 -- # keyid=0 00:30:53.396 13:13:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:53.396 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.396 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.657 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.657 13:13:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:53.657 13:13:58 -- nvmf/common.sh@717 -- # local ip 00:30:53.657 13:13:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:53.657 13:13:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:53.657 13:13:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.657 13:13:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.657 13:13:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:53.657 13:13:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.657 13:13:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:53.657 
13:13:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:53.657 13:13:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:53.657 13:13:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:53.657 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.657 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.917 nvme0n1 00:30:53.917 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.917 13:13:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.917 13:13:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:53.917 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.917 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.917 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.917 13:13:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.917 13:13:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.917 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.917 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:54.177 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.177 13:13:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:54.177 13:13:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:54.177 13:13:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:54.177 13:13:58 -- host/auth.sh@44 -- # digest=sha384 00:30:54.177 13:13:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:54.177 13:13:58 -- host/auth.sh@44 -- # keyid=1 00:30:54.177 13:13:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:54.177 13:13:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:54.177 13:13:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:54.177 13:13:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:54.177 13:13:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:30:54.177 13:13:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:54.177 13:13:58 -- host/auth.sh@68 -- # digest=sha384 00:30:54.177 13:13:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:54.177 13:13:58 -- host/auth.sh@68 -- # keyid=1 00:30:54.177 13:13:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:54.177 13:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.177 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:30:54.177 13:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.177 13:13:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:54.177 13:13:59 -- nvmf/common.sh@717 -- # local ip 00:30:54.177 13:13:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:54.177 13:13:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:54.177 13:13:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.177 13:13:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.177 13:13:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:54.177 13:13:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.177 13:13:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:54.177 13:13:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:54.177 13:13:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
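The stanza repeated above for each key slot is the connect_authenticate flow: restrict the initiator to one digest and DH group, attach to the target over TCP with the matching --dhchap-key, confirm that a controller named nvme0 appeared, then detach before the next combination. A minimal sketch of one such iteration, assuming rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py and that the matching secret is already installed on the target side for this digest/dhgroup/slot:

  # Sketch of one connect_authenticate iteration as seen in the trace above;
  # rpc_cmd is assumed to wrap scripts/rpc.py against the running SPDK target.
  digest=sha384 dhgroup=ffdhe4096 keyid=1

  # Limit the initiator to the digest and DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach over TCP, authenticating with the selected key slot.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

  # A successful DH-HMAC-CHAP handshake leaves a controller named nvme0 behind.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Tear down before the next digest/dhgroup/keyid combination.
  rpc_cmd bdev_nvme_detach_controller nvme0
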
00:30:54.177 13:13:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:54.177 13:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.177 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:30:54.437 nvme0n1 00:30:54.437 13:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.437 13:13:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.437 13:13:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.437 13:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.437 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:30:54.437 13:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.438 13:13:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.438 13:13:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.438 13:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.438 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:30:54.438 13:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.438 13:13:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:54.438 13:13:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:54.438 13:13:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:54.438 13:13:59 -- host/auth.sh@44 -- # digest=sha384 00:30:54.438 13:13:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:54.438 13:13:59 -- host/auth.sh@44 -- # keyid=2 00:30:54.438 13:13:59 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:54.438 13:13:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:54.438 13:13:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:54.438 13:13:59 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:54.438 13:13:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:30:54.698 13:13:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:54.698 13:13:59 -- host/auth.sh@68 -- # digest=sha384 00:30:54.698 13:13:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:54.698 13:13:59 -- host/auth.sh@68 -- # keyid=2 00:30:54.698 13:13:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:54.698 13:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.698 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:30:54.698 13:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.698 13:13:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:54.698 13:13:59 -- nvmf/common.sh@717 -- # local ip 00:30:54.698 13:13:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:54.698 13:13:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:54.698 13:13:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.698 13:13:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.698 13:13:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:54.698 13:13:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.698 13:13:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:54.698 13:13:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:54.698 13:13:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:54.698 13:13:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:54.698 13:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.698 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:30:54.958 nvme0n1 00:30:54.958 13:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.958 13:13:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.958 13:13:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.958 13:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.958 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:30:54.958 13:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.217 13:14:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.217 13:14:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.217 13:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.217 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:30:55.217 13:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.217 13:14:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:55.217 13:14:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:55.217 13:14:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:55.217 13:14:00 -- host/auth.sh@44 -- # digest=sha384 00:30:55.217 13:14:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:55.217 13:14:00 -- host/auth.sh@44 -- # keyid=3 00:30:55.217 13:14:00 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:55.217 13:14:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:55.217 13:14:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:55.217 13:14:00 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:55.217 13:14:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:30:55.217 13:14:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:55.217 13:14:00 -- host/auth.sh@68 -- # digest=sha384 00:30:55.217 13:14:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:55.217 13:14:00 -- host/auth.sh@68 -- # keyid=3 00:30:55.217 13:14:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:55.217 13:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.217 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:30:55.217 13:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.217 13:14:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:55.217 13:14:00 -- nvmf/common.sh@717 -- # local ip 00:30:55.217 13:14:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:55.217 13:14:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:55.217 13:14:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.217 13:14:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.217 13:14:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:55.218 13:14:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.218 13:14:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:55.218 13:14:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:55.218 13:14:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:55.218 13:14:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:55.218 13:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 
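The host/auth.sh@107-110 markers that recur through this stretch come from the nested loops driving the test matrix: every DH group is exercised with every key slot under the current digest (ffdhe4096, ffdhe6144, then ffdhe8192 here) before the whole cycle repeats for the next digest (the sha512 pass begins further down). A rough sketch of that driver loop, with the array contents inferred from this excerpt rather than taken from the real script:

  # Assumed shape of the loops behind the @107-110 trace markers; array
  # contents are inferred from this excerpt and may differ in the actual test.
  digests=(sha384 sha512)                      # only these two passes appear in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  declare -a keys                              # DHHC-1 secrets for slots 0..4 (values elided here)

  for digest in "${digests[@]}"; do            # host/auth.sh@107
      for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@108
          for keyid in "${!keys[@]}"; do       # host/auth.sh@109
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target-side key setup
              connect_authenticate "$digest" "$dhgroup" "$keyid" # initiator attach/verify/detach
          done
      done
  done
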
00:30:55.218 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 nvme0n1 00:30:55.478 13:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.478 13:14:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.478 13:14:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:55.478 13:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.478 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 13:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.738 13:14:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.738 13:14:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.738 13:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.738 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:30:55.738 13:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.738 13:14:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:55.738 13:14:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:55.738 13:14:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:55.738 13:14:00 -- host/auth.sh@44 -- # digest=sha384 00:30:55.738 13:14:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:55.738 13:14:00 -- host/auth.sh@44 -- # keyid=4 00:30:55.738 13:14:00 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:55.738 13:14:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:55.738 13:14:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:55.738 13:14:00 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:55.738 13:14:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:30:55.738 13:14:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:55.738 13:14:00 -- host/auth.sh@68 -- # digest=sha384 00:30:55.738 13:14:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:55.738 13:14:00 -- host/auth.sh@68 -- # keyid=4 00:30:55.738 13:14:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:55.738 13:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.738 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:30:55.738 13:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.738 13:14:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:55.738 13:14:00 -- nvmf/common.sh@717 -- # local ip 00:30:55.738 13:14:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:55.738 13:14:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:55.738 13:14:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.738 13:14:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.738 13:14:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:55.738 13:14:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.738 13:14:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:55.738 13:14:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:55.738 13:14:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:55.738 13:14:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:55.738 13:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.738 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:30:55.997 
nvme0n1 00:30:56.257 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.257 13:14:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.257 13:14:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:56.257 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.257 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:56.257 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.257 13:14:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.257 13:14:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.257 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.257 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:56.257 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.257 13:14:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:56.257 13:14:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:56.257 13:14:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:56.257 13:14:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:56.257 13:14:01 -- host/auth.sh@44 -- # digest=sha384 00:30:56.257 13:14:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:56.257 13:14:01 -- host/auth.sh@44 -- # keyid=0 00:30:56.257 13:14:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:56.257 13:14:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:56.257 13:14:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:56.257 13:14:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:30:56.257 13:14:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:30:56.257 13:14:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:56.257 13:14:01 -- host/auth.sh@68 -- # digest=sha384 00:30:56.257 13:14:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:56.257 13:14:01 -- host/auth.sh@68 -- # keyid=0 00:30:56.257 13:14:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:56.257 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.257 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:56.257 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.257 13:14:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:56.257 13:14:01 -- nvmf/common.sh@717 -- # local ip 00:30:56.257 13:14:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:56.257 13:14:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:56.257 13:14:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.257 13:14:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.257 13:14:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:56.257 13:14:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.257 13:14:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:56.257 13:14:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:56.257 13:14:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:56.257 13:14:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:56.257 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.257 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:56.828 nvme0n1 00:30:56.828 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:30:56.828 13:14:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.828 13:14:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:56.828 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.828 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:56.828 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.087 13:14:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.087 13:14:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.087 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.087 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:57.087 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.087 13:14:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:57.087 13:14:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:57.087 13:14:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:57.087 13:14:01 -- host/auth.sh@44 -- # digest=sha384 00:30:57.087 13:14:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:57.087 13:14:01 -- host/auth.sh@44 -- # keyid=1 00:30:57.087 13:14:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:57.087 13:14:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:57.087 13:14:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:57.087 13:14:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:30:57.087 13:14:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:30:57.087 13:14:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:57.087 13:14:01 -- host/auth.sh@68 -- # digest=sha384 00:30:57.087 13:14:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:57.087 13:14:01 -- host/auth.sh@68 -- # keyid=1 00:30:57.087 13:14:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:57.087 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.087 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:57.087 13:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.087 13:14:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:57.087 13:14:01 -- nvmf/common.sh@717 -- # local ip 00:30:57.087 13:14:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:57.087 13:14:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:57.087 13:14:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.087 13:14:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.087 13:14:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:57.087 13:14:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.087 13:14:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:57.087 13:14:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:57.087 13:14:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:57.087 13:14:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:57.087 13:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.087 13:14:01 -- common/autotest_common.sh@10 -- # set +x 00:30:57.657 nvme0n1 00:30:57.657 13:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.657 13:14:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.657 13:14:02 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:30:57.657 13:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.657 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:30:57.657 13:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.001 13:14:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.001 13:14:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.001 13:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.001 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:30:58.001 13:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.001 13:14:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:58.001 13:14:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:58.001 13:14:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:58.001 13:14:02 -- host/auth.sh@44 -- # digest=sha384 00:30:58.001 13:14:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:58.001 13:14:02 -- host/auth.sh@44 -- # keyid=2 00:30:58.001 13:14:02 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:58.001 13:14:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:58.001 13:14:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:58.001 13:14:02 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:30:58.001 13:14:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:30:58.001 13:14:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:58.001 13:14:02 -- host/auth.sh@68 -- # digest=sha384 00:30:58.001 13:14:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:58.001 13:14:02 -- host/auth.sh@68 -- # keyid=2 00:30:58.001 13:14:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:58.001 13:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.001 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:30:58.001 13:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.001 13:14:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:58.001 13:14:02 -- nvmf/common.sh@717 -- # local ip 00:30:58.001 13:14:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:58.001 13:14:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:58.001 13:14:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.001 13:14:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.001 13:14:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:58.001 13:14:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.001 13:14:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:58.001 13:14:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:58.001 13:14:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:58.001 13:14:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:58.001 13:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.001 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:30:58.575 nvme0n1 00:30:58.575 13:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.575 13:14:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.575 13:14:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:58.575 13:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.575 13:14:03 -- common/autotest_common.sh@10 -- # set +x 
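The DHHC-1 strings cycling through these stanzas are NVMe DH-HMAC-CHAP secret representations. As I read the format, the field after DHHC-1 indicates how the secret was transformed (00 = plain secret, 01/02/03 = hashed with SHA-256/384/512) and the base64 payload carries the secret followed by a CRC-32; the five key slots exercised here cover the plain encoding and all three transformed ones. A small, hedged decode of the slot-2 key from the trace:

  # Hedged reading of the DHHC-1 secret format (per the NVMe-oF in-band
  # authentication spec as I understand it): DHHC-1:<hash id>:<base64 blob>:
  # where the blob is the secret followed by a CRC-32 of the secret.
  key='DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt:'
  hash_id=$(cut -d: -f2 <<< "$key")   # 00=plain, 01=SHA-256, 02=SHA-384, 03=SHA-512 (assumed mapping)
  blob=$(cut -d: -f3 <<< "$key")
  echo "hash id: $hash_id"
  echo -n "$blob" | base64 -d | wc -c  # 36 bytes here: 32-byte secret plus 4-byte CRC
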
00:30:58.575 13:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.575 13:14:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.575 13:14:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.575 13:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.575 13:14:03 -- common/autotest_common.sh@10 -- # set +x 00:30:58.575 13:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.575 13:14:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:58.575 13:14:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:58.575 13:14:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:58.575 13:14:03 -- host/auth.sh@44 -- # digest=sha384 00:30:58.575 13:14:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:58.575 13:14:03 -- host/auth.sh@44 -- # keyid=3 00:30:58.575 13:14:03 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:58.575 13:14:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:58.575 13:14:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:58.575 13:14:03 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:30:58.575 13:14:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:30:58.575 13:14:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:58.575 13:14:03 -- host/auth.sh@68 -- # digest=sha384 00:30:58.575 13:14:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:58.575 13:14:03 -- host/auth.sh@68 -- # keyid=3 00:30:58.575 13:14:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:58.575 13:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.575 13:14:03 -- common/autotest_common.sh@10 -- # set +x 00:30:58.575 13:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.575 13:14:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:58.575 13:14:03 -- nvmf/common.sh@717 -- # local ip 00:30:58.575 13:14:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:58.575 13:14:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:58.575 13:14:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.575 13:14:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.575 13:14:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:58.575 13:14:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.575 13:14:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:58.575 13:14:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:58.575 13:14:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:58.575 13:14:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:58.575 13:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.575 13:14:03 -- common/autotest_common.sh@10 -- # set +x 00:30:59.513 nvme0n1 00:30:59.513 13:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.513 13:14:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.514 13:14:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:59.514 13:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.514 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:30:59.514 13:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.514 13:14:04 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:30:59.514 13:14:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.514 13:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.514 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:30:59.514 13:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.514 13:14:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:59.514 13:14:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:59.514 13:14:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:59.514 13:14:04 -- host/auth.sh@44 -- # digest=sha384 00:30:59.514 13:14:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:59.514 13:14:04 -- host/auth.sh@44 -- # keyid=4 00:30:59.514 13:14:04 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:59.514 13:14:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:59.514 13:14:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:59.514 13:14:04 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:30:59.514 13:14:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:30:59.514 13:14:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:59.514 13:14:04 -- host/auth.sh@68 -- # digest=sha384 00:30:59.514 13:14:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:59.514 13:14:04 -- host/auth.sh@68 -- # keyid=4 00:30:59.514 13:14:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:59.514 13:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.514 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:30:59.514 13:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.514 13:14:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:59.514 13:14:04 -- nvmf/common.sh@717 -- # local ip 00:30:59.514 13:14:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:59.514 13:14:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:59.514 13:14:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.514 13:14:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.514 13:14:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:59.514 13:14:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.514 13:14:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:59.514 13:14:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:59.514 13:14:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:59.514 13:14:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:59.514 13:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.514 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:31:00.453 nvme0n1 00:31:00.453 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.453 13:14:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.453 13:14:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:00.453 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.453 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.453 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.453 13:14:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.453 13:14:05 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:00.453 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.453 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.453 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.453 13:14:05 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:31:00.453 13:14:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:00.453 13:14:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:00.453 13:14:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:00.453 13:14:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:00.453 13:14:05 -- host/auth.sh@44 -- # digest=sha512 00:31:00.453 13:14:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:00.453 13:14:05 -- host/auth.sh@44 -- # keyid=0 00:31:00.453 13:14:05 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:00.453 13:14:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:00.453 13:14:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:00.454 13:14:05 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:00.454 13:14:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:31:00.454 13:14:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:00.454 13:14:05 -- host/auth.sh@68 -- # digest=sha512 00:31:00.454 13:14:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:00.454 13:14:05 -- host/auth.sh@68 -- # keyid=0 00:31:00.454 13:14:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:00.454 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.454 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.454 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.454 13:14:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:00.454 13:14:05 -- nvmf/common.sh@717 -- # local ip 00:31:00.454 13:14:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:00.454 13:14:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:00.454 13:14:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.454 13:14:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.454 13:14:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:00.454 13:14:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.454 13:14:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:00.454 13:14:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:00.454 13:14:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:00.454 13:14:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:00.454 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.454 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.454 nvme0n1 00:31:00.454 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.454 13:14:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.454 13:14:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:00.454 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.454 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.454 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.454 13:14:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.454 13:14:05 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:00.454 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.454 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.454 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.454 13:14:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:00.454 13:14:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:00.454 13:14:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:00.454 13:14:05 -- host/auth.sh@44 -- # digest=sha512 00:31:00.454 13:14:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:00.454 13:14:05 -- host/auth.sh@44 -- # keyid=1 00:31:00.454 13:14:05 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:00.454 13:14:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:00.454 13:14:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:00.454 13:14:05 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:00.454 13:14:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:31:00.454 13:14:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:00.454 13:14:05 -- host/auth.sh@68 -- # digest=sha512 00:31:00.454 13:14:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:00.454 13:14:05 -- host/auth.sh@68 -- # keyid=1 00:31:00.454 13:14:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:00.454 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.454 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.454 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.454 13:14:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:00.454 13:14:05 -- nvmf/common.sh@717 -- # local ip 00:31:00.454 13:14:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:00.454 13:14:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:00.454 13:14:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.454 13:14:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.454 13:14:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:00.454 13:14:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.454 13:14:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:00.454 13:14:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:00.454 13:14:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:00.454 13:14:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:00.454 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.454 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.714 nvme0n1 00:31:00.714 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.714 13:14:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.714 13:14:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:00.714 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.714 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.714 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.714 13:14:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.714 13:14:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.714 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 
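Two idioms recur in these verification stanzas: the [[ nvme0 == \n\v\m\e\0 ]] entries are bash [[ comparisons whose right-hand side xtrace prints character-escaped to show it is matched as a literal string rather than a glob, and the [[ 0 == 0 ]] entries from common/autotest_common.sh@577 appear to assert that the preceding RPC exited successfully. A stand-alone illustration of both, with a literal JSON stand-in for the real bdev_nvme_get_controllers output:

  # Illustration of the two assertion idioms only; the echoed JSON below is a
  # stand-in for bdev_nvme_get_controllers output, not the harness's real code.
  ctrl=$(echo '[{"name":"nvme0"}]' | jq -r '.[].name')
  [[ $? == 0 ]]           # mirrors the "[[ 0 == 0 ]]" status checks emitted after each RPC
  [[ $ctrl == "nvme0" ]]  # quoted RHS is matched literally; xtrace renders it as \n\v\m\e\0
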
00:31:00.714 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.714 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.714 13:14:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:00.714 13:14:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:00.714 13:14:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:00.714 13:14:05 -- host/auth.sh@44 -- # digest=sha512 00:31:00.714 13:14:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:00.714 13:14:05 -- host/auth.sh@44 -- # keyid=2 00:31:00.714 13:14:05 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:00.714 13:14:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:00.714 13:14:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:00.714 13:14:05 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:00.714 13:14:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:31:00.714 13:14:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:00.714 13:14:05 -- host/auth.sh@68 -- # digest=sha512 00:31:00.714 13:14:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:00.714 13:14:05 -- host/auth.sh@68 -- # keyid=2 00:31:00.714 13:14:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:00.714 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.714 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.714 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.714 13:14:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:00.714 13:14:05 -- nvmf/common.sh@717 -- # local ip 00:31:00.715 13:14:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:00.715 13:14:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:00.715 13:14:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.715 13:14:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.715 13:14:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:00.715 13:14:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.715 13:14:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:00.715 13:14:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:00.715 13:14:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:00.715 13:14:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:00.715 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.715 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.975 nvme0n1 00:31:00.975 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.975 13:14:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.975 13:14:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:00.975 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.975 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.975 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.975 13:14:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.975 13:14:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.975 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.975 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.975 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.975 13:14:05 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:00.975 13:14:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:00.975 13:14:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:00.975 13:14:05 -- host/auth.sh@44 -- # digest=sha512 00:31:00.975 13:14:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:00.975 13:14:05 -- host/auth.sh@44 -- # keyid=3 00:31:00.975 13:14:05 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:00.975 13:14:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:00.975 13:14:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:00.975 13:14:05 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:00.975 13:14:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:31:00.975 13:14:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:00.975 13:14:05 -- host/auth.sh@68 -- # digest=sha512 00:31:00.975 13:14:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:00.975 13:14:05 -- host/auth.sh@68 -- # keyid=3 00:31:00.975 13:14:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:00.975 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.975 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:00.975 13:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.975 13:14:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:00.975 13:14:05 -- nvmf/common.sh@717 -- # local ip 00:31:00.975 13:14:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:00.975 13:14:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:00.975 13:14:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.975 13:14:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.975 13:14:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:00.975 13:14:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.975 13:14:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:00.975 13:14:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:00.975 13:14:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:00.975 13:14:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:00.975 13:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.975 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:01.236 nvme0n1 00:31:01.236 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.236 13:14:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.236 13:14:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:01.236 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.236 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.236 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.236 13:14:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.236 13:14:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.236 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.236 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.236 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.236 13:14:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:01.236 13:14:06 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:31:01.236 13:14:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:01.236 13:14:06 -- host/auth.sh@44 -- # digest=sha512 00:31:01.236 13:14:06 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:01.236 13:14:06 -- host/auth.sh@44 -- # keyid=4 00:31:01.236 13:14:06 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:01.236 13:14:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:01.236 13:14:06 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:01.236 13:14:06 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:01.236 13:14:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:31:01.236 13:14:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:01.236 13:14:06 -- host/auth.sh@68 -- # digest=sha512 00:31:01.236 13:14:06 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:01.236 13:14:06 -- host/auth.sh@68 -- # keyid=4 00:31:01.236 13:14:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:01.236 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.236 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.236 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.236 13:14:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:01.236 13:14:06 -- nvmf/common.sh@717 -- # local ip 00:31:01.236 13:14:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:01.236 13:14:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:01.237 13:14:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.237 13:14:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.237 13:14:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:01.237 13:14:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.237 13:14:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:01.237 13:14:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:01.237 13:14:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:01.237 13:14:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:01.237 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.237 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.497 nvme0n1 00:31:01.497 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.497 13:14:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.497 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.497 13:14:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:01.497 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.497 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.497 13:14:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.497 13:14:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.497 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.497 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.497 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.497 13:14:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:01.497 13:14:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:01.497 13:14:06 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:31:01.497 13:14:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:01.497 13:14:06 -- host/auth.sh@44 -- # digest=sha512 00:31:01.497 13:14:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:01.497 13:14:06 -- host/auth.sh@44 -- # keyid=0 00:31:01.497 13:14:06 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:01.497 13:14:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:01.497 13:14:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:01.497 13:14:06 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:01.497 13:14:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:31:01.497 13:14:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:01.497 13:14:06 -- host/auth.sh@68 -- # digest=sha512 00:31:01.497 13:14:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:01.497 13:14:06 -- host/auth.sh@68 -- # keyid=0 00:31:01.497 13:14:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:01.497 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.497 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.497 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.497 13:14:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:01.497 13:14:06 -- nvmf/common.sh@717 -- # local ip 00:31:01.497 13:14:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:01.497 13:14:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:01.497 13:14:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.498 13:14:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.498 13:14:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:01.498 13:14:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.498 13:14:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:01.498 13:14:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:01.498 13:14:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:01.498 13:14:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:01.498 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.498 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.498 nvme0n1 00:31:01.498 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.498 13:14:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.498 13:14:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:01.498 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.498 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.758 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.758 13:14:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.758 13:14:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.758 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.758 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.758 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.758 13:14:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:01.758 13:14:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:01.758 13:14:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:01.758 13:14:06 -- host/auth.sh@44 -- # digest=sha512 00:31:01.758 
13:14:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:01.758 13:14:06 -- host/auth.sh@44 -- # keyid=1 00:31:01.758 13:14:06 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:01.758 13:14:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:01.758 13:14:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:01.758 13:14:06 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:01.758 13:14:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:31:01.758 13:14:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:01.758 13:14:06 -- host/auth.sh@68 -- # digest=sha512 00:31:01.758 13:14:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:01.758 13:14:06 -- host/auth.sh@68 -- # keyid=1 00:31:01.758 13:14:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:01.758 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.758 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.758 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.758 13:14:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:01.758 13:14:06 -- nvmf/common.sh@717 -- # local ip 00:31:01.758 13:14:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:01.758 13:14:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:01.758 13:14:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.758 13:14:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.758 13:14:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:01.758 13:14:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.758 13:14:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:01.758 13:14:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:01.758 13:14:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:01.758 13:14:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:01.758 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.758 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:01.758 nvme0n1 00:31:01.758 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.018 13:14:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.018 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.018 13:14:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.018 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:02.018 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.018 13:14:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.018 13:14:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.019 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.019 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:02.019 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.019 13:14:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.019 13:14:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:02.019 13:14:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.019 13:14:06 -- host/auth.sh@44 -- # digest=sha512 00:31:02.019 13:14:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.019 13:14:06 -- host/auth.sh@44 -- # keyid=2 00:31:02.019 
13:14:06 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:02.019 13:14:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:02.019 13:14:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:02.019 13:14:06 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:02.019 13:14:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:31:02.019 13:14:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.019 13:14:06 -- host/auth.sh@68 -- # digest=sha512 00:31:02.019 13:14:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:02.019 13:14:06 -- host/auth.sh@68 -- # keyid=2 00:31:02.019 13:14:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:02.019 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.019 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:02.019 13:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.019 13:14:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.019 13:14:06 -- nvmf/common.sh@717 -- # local ip 00:31:02.019 13:14:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.019 13:14:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.019 13:14:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.019 13:14:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.019 13:14:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.019 13:14:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.019 13:14:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.019 13:14:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.019 13:14:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.019 13:14:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:02.019 13:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.019 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:02.280 nvme0n1 00:31:02.280 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.280 13:14:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.280 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.280 13:14:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.280 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.280 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.280 13:14:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.280 13:14:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.280 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.280 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.280 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.280 13:14:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.280 13:14:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:02.280 13:14:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.280 13:14:07 -- host/auth.sh@44 -- # digest=sha512 00:31:02.280 13:14:07 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.280 13:14:07 -- host/auth.sh@44 -- # keyid=3 00:31:02.280 13:14:07 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:02.280 13:14:07 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:31:02.280 13:14:07 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:02.280 13:14:07 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:02.280 13:14:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:31:02.280 13:14:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.281 13:14:07 -- host/auth.sh@68 -- # digest=sha512 00:31:02.281 13:14:07 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:02.281 13:14:07 -- host/auth.sh@68 -- # keyid=3 00:31:02.281 13:14:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:02.281 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.281 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.281 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.281 13:14:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.281 13:14:07 -- nvmf/common.sh@717 -- # local ip 00:31:02.281 13:14:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.281 13:14:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.281 13:14:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.281 13:14:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.281 13:14:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.281 13:14:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.281 13:14:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.281 13:14:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.281 13:14:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.281 13:14:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:02.281 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.281 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.543 nvme0n1 00:31:02.543 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.543 13:14:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.543 13:14:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.543 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.543 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.543 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.543 13:14:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.543 13:14:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.543 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.543 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.543 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.543 13:14:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.543 13:14:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:02.543 13:14:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.543 13:14:07 -- host/auth.sh@44 -- # digest=sha512 00:31:02.543 13:14:07 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.543 13:14:07 -- host/auth.sh@44 -- # keyid=4 00:31:02.543 13:14:07 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:02.543 13:14:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:02.543 13:14:07 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:02.543 
13:14:07 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:02.543 13:14:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:31:02.543 13:14:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.543 13:14:07 -- host/auth.sh@68 -- # digest=sha512 00:31:02.543 13:14:07 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:02.543 13:14:07 -- host/auth.sh@68 -- # keyid=4 00:31:02.543 13:14:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:02.543 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.543 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.543 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.543 13:14:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.543 13:14:07 -- nvmf/common.sh@717 -- # local ip 00:31:02.543 13:14:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.543 13:14:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.543 13:14:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.543 13:14:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.543 13:14:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.543 13:14:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.543 13:14:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.543 13:14:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.543 13:14:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.543 13:14:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:02.543 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.543 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.803 nvme0n1 00:31:02.803 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.803 13:14:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.803 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.803 13:14:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.803 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.803 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.803 13:14:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.803 13:14:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.803 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.803 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.803 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.803 13:14:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:02.803 13:14:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.803 13:14:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:02.803 13:14:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.803 13:14:07 -- host/auth.sh@44 -- # digest=sha512 00:31:02.803 13:14:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:02.803 13:14:07 -- host/auth.sh@44 -- # keyid=0 00:31:02.803 13:14:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:02.803 13:14:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:02.803 13:14:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:02.803 13:14:07 -- host/auth.sh@49 -- # echo 
DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:02.803 13:14:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:31:02.803 13:14:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.803 13:14:07 -- host/auth.sh@68 -- # digest=sha512 00:31:02.803 13:14:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:02.803 13:14:07 -- host/auth.sh@68 -- # keyid=0 00:31:02.803 13:14:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:02.803 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.803 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:02.803 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.803 13:14:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.803 13:14:07 -- nvmf/common.sh@717 -- # local ip 00:31:02.803 13:14:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.803 13:14:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.804 13:14:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.804 13:14:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.804 13:14:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.804 13:14:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.804 13:14:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.804 13:14:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.804 13:14:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.804 13:14:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:02.804 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.804 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:03.065 nvme0n1 00:31:03.065 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.065 13:14:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.065 13:14:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:03.065 13:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.065 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:03.065 13:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.065 13:14:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.065 13:14:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.065 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.065 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.065 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.065 13:14:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:03.065 13:14:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:03.065 13:14:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:03.065 13:14:08 -- host/auth.sh@44 -- # digest=sha512 00:31:03.065 13:14:08 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:03.065 13:14:08 -- host/auth.sh@44 -- # keyid=1 00:31:03.065 13:14:08 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:03.065 13:14:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:03.065 13:14:08 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:03.065 13:14:08 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:03.065 13:14:08 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:31:03.065 13:14:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:03.065 13:14:08 -- host/auth.sh@68 -- # digest=sha512 00:31:03.065 13:14:08 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:03.065 13:14:08 -- host/auth.sh@68 -- # keyid=1 00:31:03.065 13:14:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:03.065 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.065 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.065 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.065 13:14:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:03.065 13:14:08 -- nvmf/common.sh@717 -- # local ip 00:31:03.065 13:14:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:03.065 13:14:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:03.065 13:14:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.065 13:14:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.065 13:14:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:03.065 13:14:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.065 13:14:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:03.065 13:14:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:03.065 13:14:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:03.065 13:14:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:03.065 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.065 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.325 nvme0n1 00:31:03.325 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.325 13:14:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.325 13:14:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:03.325 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.325 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.325 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.325 13:14:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.325 13:14:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.325 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.325 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.325 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.325 13:14:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:03.325 13:14:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:03.325 13:14:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:03.325 13:14:08 -- host/auth.sh@44 -- # digest=sha512 00:31:03.325 13:14:08 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:03.325 13:14:08 -- host/auth.sh@44 -- # keyid=2 00:31:03.325 13:14:08 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:03.325 13:14:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:03.325 13:14:08 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:03.325 13:14:08 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:03.325 13:14:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:31:03.325 13:14:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:03.325 13:14:08 -- host/auth.sh@68 -- # 
digest=sha512 00:31:03.325 13:14:08 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:03.325 13:14:08 -- host/auth.sh@68 -- # keyid=2 00:31:03.325 13:14:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:03.325 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.325 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.585 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.585 13:14:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:03.585 13:14:08 -- nvmf/common.sh@717 -- # local ip 00:31:03.585 13:14:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:03.585 13:14:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:03.585 13:14:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.585 13:14:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.585 13:14:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:03.585 13:14:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.585 13:14:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:03.585 13:14:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:03.585 13:14:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:03.585 13:14:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:03.585 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.585 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.845 nvme0n1 00:31:03.845 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.845 13:14:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.845 13:14:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:03.845 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.845 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.845 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.845 13:14:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.845 13:14:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.845 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.845 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.845 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.845 13:14:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:03.845 13:14:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:03.845 13:14:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:03.845 13:14:08 -- host/auth.sh@44 -- # digest=sha512 00:31:03.845 13:14:08 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:03.845 13:14:08 -- host/auth.sh@44 -- # keyid=3 00:31:03.845 13:14:08 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:03.845 13:14:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:03.845 13:14:08 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:03.845 13:14:08 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:03.845 13:14:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:31:03.845 13:14:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:03.845 13:14:08 -- host/auth.sh@68 -- # digest=sha512 00:31:03.845 13:14:08 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:03.845 13:14:08 -- host/auth.sh@68 
-- # keyid=3 00:31:03.845 13:14:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:03.845 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.845 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:03.845 13:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.845 13:14:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:03.845 13:14:08 -- nvmf/common.sh@717 -- # local ip 00:31:03.845 13:14:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:03.845 13:14:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:03.845 13:14:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.845 13:14:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.845 13:14:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:03.845 13:14:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.845 13:14:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:03.845 13:14:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:03.845 13:14:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:03.845 13:14:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:03.845 13:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.845 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:04.107 nvme0n1 00:31:04.107 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.107 13:14:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.107 13:14:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:04.107 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.107 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.107 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.107 13:14:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.107 13:14:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.107 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.107 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.107 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.107 13:14:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:04.107 13:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:04.107 13:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:04.107 13:14:09 -- host/auth.sh@44 -- # digest=sha512 00:31:04.107 13:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.107 13:14:09 -- host/auth.sh@44 -- # keyid=4 00:31:04.107 13:14:09 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:04.107 13:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:04.107 13:14:09 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:04.107 13:14:09 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:04.107 13:14:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:31:04.107 13:14:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:04.107 13:14:09 -- host/auth.sh@68 -- # digest=sha512 00:31:04.107 13:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:04.107 13:14:09 -- host/auth.sh@68 -- # keyid=4 00:31:04.107 13:14:09 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:04.107 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.107 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.107 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.107 13:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:04.107 13:14:09 -- nvmf/common.sh@717 -- # local ip 00:31:04.107 13:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:04.107 13:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:04.107 13:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.107 13:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.107 13:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:04.107 13:14:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.107 13:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:04.107 13:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:04.107 13:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:04.107 13:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:04.107 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.107 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.369 nvme0n1 00:31:04.369 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.369 13:14:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.369 13:14:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:04.369 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.369 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.369 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.369 13:14:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.369 13:14:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.369 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.369 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.630 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.630 13:14:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:04.630 13:14:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:04.630 13:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:04.630 13:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:04.630 13:14:09 -- host/auth.sh@44 -- # digest=sha512 00:31:04.630 13:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:04.630 13:14:09 -- host/auth.sh@44 -- # keyid=0 00:31:04.630 13:14:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:04.630 13:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:04.630 13:14:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:04.630 13:14:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:04.630 13:14:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:31:04.630 13:14:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:04.630 13:14:09 -- host/auth.sh@68 -- # digest=sha512 00:31:04.630 13:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:04.630 13:14:09 -- host/auth.sh@68 -- # keyid=0 00:31:04.630 13:14:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:04.630 
13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.630 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.630 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.630 13:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:04.630 13:14:09 -- nvmf/common.sh@717 -- # local ip 00:31:04.630 13:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:04.630 13:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:04.630 13:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.630 13:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.630 13:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:04.630 13:14:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.630 13:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:04.630 13:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:04.630 13:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:04.630 13:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:04.630 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.630 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.891 nvme0n1 00:31:04.891 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.891 13:14:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.891 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.891 13:14:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:04.891 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.891 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.891 13:14:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.891 13:14:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.891 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.891 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.891 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.891 13:14:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:04.891 13:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:04.891 13:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:04.891 13:14:09 -- host/auth.sh@44 -- # digest=sha512 00:31:04.891 13:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:04.891 13:14:09 -- host/auth.sh@44 -- # keyid=1 00:31:04.891 13:14:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:04.891 13:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:04.891 13:14:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:04.891 13:14:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:04.891 13:14:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:31:04.891 13:14:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:04.891 13:14:09 -- host/auth.sh@68 -- # digest=sha512 00:31:04.891 13:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:04.891 13:14:09 -- host/auth.sh@68 -- # keyid=1 00:31:04.891 13:14:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:04.891 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.891 13:14:09 -- common/autotest_common.sh@10 -- # 
set +x 00:31:04.892 13:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.892 13:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:04.892 13:14:09 -- nvmf/common.sh@717 -- # local ip 00:31:04.892 13:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:04.892 13:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:04.892 13:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.892 13:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.892 13:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:04.892 13:14:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.892 13:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:04.892 13:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:04.892 13:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:04.892 13:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:04.892 13:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.892 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:05.463 nvme0n1 00:31:05.463 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.463 13:14:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.463 13:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.463 13:14:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:05.463 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:05.463 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.463 13:14:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.463 13:14:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.463 13:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.463 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:05.463 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.463 13:14:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:05.463 13:14:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:05.463 13:14:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:05.463 13:14:10 -- host/auth.sh@44 -- # digest=sha512 00:31:05.463 13:14:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:05.463 13:14:10 -- host/auth.sh@44 -- # keyid=2 00:31:05.463 13:14:10 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:05.463 13:14:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:05.463 13:14:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:05.463 13:14:10 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:05.463 13:14:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:31:05.463 13:14:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:05.463 13:14:10 -- host/auth.sh@68 -- # digest=sha512 00:31:05.463 13:14:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:05.463 13:14:10 -- host/auth.sh@68 -- # keyid=2 00:31:05.463 13:14:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:05.463 13:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.463 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:05.463 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.463 13:14:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:05.463 13:14:10 -- 
nvmf/common.sh@717 -- # local ip 00:31:05.463 13:14:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:05.463 13:14:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:05.463 13:14:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.463 13:14:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.463 13:14:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:05.463 13:14:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.463 13:14:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:05.463 13:14:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:05.463 13:14:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:05.463 13:14:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:05.463 13:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.463 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:06.036 nvme0n1 00:31:06.036 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.036 13:14:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.036 13:14:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:06.036 13:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.036 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:06.036 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.036 13:14:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.036 13:14:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.036 13:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.036 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:06.036 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.036 13:14:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:06.036 13:14:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:06.036 13:14:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:06.036 13:14:10 -- host/auth.sh@44 -- # digest=sha512 00:31:06.036 13:14:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:06.036 13:14:10 -- host/auth.sh@44 -- # keyid=3 00:31:06.036 13:14:10 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:06.036 13:14:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:06.036 13:14:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:06.036 13:14:10 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:06.036 13:14:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:31:06.036 13:14:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:06.036 13:14:10 -- host/auth.sh@68 -- # digest=sha512 00:31:06.036 13:14:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:06.036 13:14:10 -- host/auth.sh@68 -- # keyid=3 00:31:06.036 13:14:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:06.036 13:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.036 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:06.036 13:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.036 13:14:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:06.036 13:14:10 -- nvmf/common.sh@717 -- # local ip 00:31:06.036 13:14:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:06.036 13:14:11 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:06.036 13:14:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.036 13:14:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.036 13:14:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:06.036 13:14:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.036 13:14:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:06.036 13:14:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:06.036 13:14:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:06.036 13:14:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:06.036 13:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.036 13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:06.606 nvme0n1 00:31:06.606 13:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.606 13:14:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.606 13:14:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:06.606 13:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.606 13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:06.606 13:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.606 13:14:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.606 13:14:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.606 13:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.606 13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:06.606 13:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.606 13:14:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:06.606 13:14:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:06.606 13:14:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:06.606 13:14:11 -- host/auth.sh@44 -- # digest=sha512 00:31:06.606 13:14:11 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:06.606 13:14:11 -- host/auth.sh@44 -- # keyid=4 00:31:06.606 13:14:11 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:06.606 13:14:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:06.606 13:14:11 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:06.606 13:14:11 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:06.606 13:14:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:31:06.606 13:14:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:06.606 13:14:11 -- host/auth.sh@68 -- # digest=sha512 00:31:06.606 13:14:11 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:06.606 13:14:11 -- host/auth.sh@68 -- # keyid=4 00:31:06.606 13:14:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:06.606 13:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.606 13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:06.606 13:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.606 13:14:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:06.606 13:14:11 -- nvmf/common.sh@717 -- # local ip 00:31:06.606 13:14:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:06.606 13:14:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:06.606 13:14:11 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.606 13:14:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.606 13:14:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:06.606 13:14:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.606 13:14:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:06.606 13:14:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:06.606 13:14:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:06.607 13:14:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:06.607 13:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.607 13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:07.180 nvme0n1 00:31:07.180 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.180 13:14:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.180 13:14:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:07.180 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.180 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:07.180 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.180 13:14:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.180 13:14:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.180 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.180 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:07.180 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.180 13:14:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:07.180 13:14:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:07.180 13:14:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:07.180 13:14:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:07.180 13:14:12 -- host/auth.sh@44 -- # digest=sha512 00:31:07.180 13:14:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:07.180 13:14:12 -- host/auth.sh@44 -- # keyid=0 00:31:07.180 13:14:12 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:07.180 13:14:12 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:07.180 13:14:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:07.180 13:14:12 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM5NzM5NjJmYmJiM2JjYWIyMTZjNWVmZDk4YWRkZDUswSTO: 00:31:07.180 13:14:12 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:31:07.180 13:14:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:07.180 13:14:12 -- host/auth.sh@68 -- # digest=sha512 00:31:07.180 13:14:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:07.180 13:14:12 -- host/auth.sh@68 -- # keyid=0 00:31:07.180 13:14:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:07.180 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.180 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:07.180 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.180 13:14:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:07.180 13:14:12 -- nvmf/common.sh@717 -- # local ip 00:31:07.180 13:14:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:07.180 13:14:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:07.180 13:14:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.180 13:14:12 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.180 13:14:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:07.180 13:14:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.180 13:14:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:07.180 13:14:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:07.180 13:14:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:07.180 13:14:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:07.180 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.180 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.119 nvme0n1 00:31:08.119 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.119 13:14:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.120 13:14:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:08.120 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.120 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.120 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.120 13:14:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.120 13:14:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.120 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.120 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.120 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.120 13:14:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:08.120 13:14:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:08.120 13:14:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:08.120 13:14:12 -- host/auth.sh@44 -- # digest=sha512 00:31:08.120 13:14:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:08.120 13:14:12 -- host/auth.sh@44 -- # keyid=1 00:31:08.120 13:14:12 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:08.120 13:14:12 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:08.120 13:14:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:08.120 13:14:12 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:08.120 13:14:12 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:31:08.120 13:14:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:08.120 13:14:12 -- host/auth.sh@68 -- # digest=sha512 00:31:08.120 13:14:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:08.120 13:14:12 -- host/auth.sh@68 -- # keyid=1 00:31:08.120 13:14:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:08.120 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.120 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.120 13:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.120 13:14:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:08.120 13:14:12 -- nvmf/common.sh@717 -- # local ip 00:31:08.120 13:14:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:08.120 13:14:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:08.120 13:14:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.120 13:14:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.120 13:14:12 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:31:08.120 13:14:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.120 13:14:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:08.120 13:14:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:08.120 13:14:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:08.120 13:14:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:08.120 13:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.120 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.688 nvme0n1 00:31:08.688 13:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.688 13:14:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.688 13:14:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:08.688 13:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.688 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:31:08.688 13:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.688 13:14:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.688 13:14:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.688 13:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.688 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:31:08.688 13:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.688 13:14:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:08.688 13:14:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:08.688 13:14:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:08.688 13:14:13 -- host/auth.sh@44 -- # digest=sha512 00:31:08.688 13:14:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:08.688 13:14:13 -- host/auth.sh@44 -- # keyid=2 00:31:08.688 13:14:13 -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:08.688 13:14:13 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:08.688 13:14:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:08.688 13:14:13 -- host/auth.sh@49 -- # echo DHHC-1:01:Yjc4ODhiNzNhMjkyOGZkOTNmNjczZGI1MDg4NzQyYTGItMYt: 00:31:08.688 13:14:13 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:31:08.689 13:14:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:08.689 13:14:13 -- host/auth.sh@68 -- # digest=sha512 00:31:08.689 13:14:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:08.689 13:14:13 -- host/auth.sh@68 -- # keyid=2 00:31:08.689 13:14:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:08.689 13:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.689 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:31:08.689 13:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.689 13:14:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:08.689 13:14:13 -- nvmf/common.sh@717 -- # local ip 00:31:08.689 13:14:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:08.689 13:14:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:08.689 13:14:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.689 13:14:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.689 13:14:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:08.689 13:14:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.689 13:14:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:08.689 
13:14:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:08.689 13:14:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:08.689 13:14:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:08.689 13:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.689 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.627 nvme0n1 00:31:09.627 13:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.627 13:14:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.627 13:14:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:09.627 13:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.627 13:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:09.627 13:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.627 13:14:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.627 13:14:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.627 13:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.627 13:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:09.627 13:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.627 13:14:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:09.627 13:14:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:09.627 13:14:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:09.627 13:14:14 -- host/auth.sh@44 -- # digest=sha512 00:31:09.627 13:14:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:09.627 13:14:14 -- host/auth.sh@44 -- # keyid=3 00:31:09.627 13:14:14 -- host/auth.sh@45 -- # key=DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:09.627 13:14:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:09.627 13:14:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:09.627 13:14:14 -- host/auth.sh@49 -- # echo DHHC-1:02:OWQyNTdiMDJjODg4YzExZDllMTk2OTNmOTE0NTY2ZTViNzVlMzhhMzQyYTUxYjc2YPpu8A==: 00:31:09.627 13:14:14 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:31:09.627 13:14:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:09.627 13:14:14 -- host/auth.sh@68 -- # digest=sha512 00:31:09.627 13:14:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:09.627 13:14:14 -- host/auth.sh@68 -- # keyid=3 00:31:09.627 13:14:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:09.627 13:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.627 13:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:09.627 13:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.627 13:14:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:09.627 13:14:14 -- nvmf/common.sh@717 -- # local ip 00:31:09.627 13:14:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:09.627 13:14:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:09.627 13:14:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.627 13:14:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.627 13:14:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:09.627 13:14:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.627 13:14:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:09.627 13:14:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:09.627 13:14:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:31:09.627 13:14:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:09.627 13:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.627 13:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.567 nvme0n1 00:31:10.567 13:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.567 13:14:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.567 13:14:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.567 13:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.567 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:31:10.567 13:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.567 13:14:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.567 13:14:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.567 13:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.567 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:31:10.567 13:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.567 13:14:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:10.567 13:14:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:10.567 13:14:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:10.567 13:14:15 -- host/auth.sh@44 -- # digest=sha512 00:31:10.567 13:14:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:10.567 13:14:15 -- host/auth.sh@44 -- # keyid=4 00:31:10.567 13:14:15 -- host/auth.sh@45 -- # key=DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:10.567 13:14:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:10.567 13:14:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:10.567 13:14:15 -- host/auth.sh@49 -- # echo DHHC-1:03:MDA3ZWYxNTg1ZmNmNTMxOTg5MTM4NDBmOWY5MmQxOGQzZTZmODY5ZDAyNWFjODkzZjZiNmY4ZTM5ZjUyMzVhNkfxi6g=: 00:31:10.567 13:14:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:31:10.567 13:14:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:10.567 13:14:15 -- host/auth.sh@68 -- # digest=sha512 00:31:10.567 13:14:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:10.567 13:14:15 -- host/auth.sh@68 -- # keyid=4 00:31:10.567 13:14:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:10.567 13:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.567 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:31:10.567 13:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.567 13:14:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:10.567 13:14:15 -- nvmf/common.sh@717 -- # local ip 00:31:10.567 13:14:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:10.567 13:14:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:10.567 13:14:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.567 13:14:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.567 13:14:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:10.567 13:14:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.567 13:14:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:10.567 13:14:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:10.567 13:14:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:10.568 13:14:15 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:10.568 13:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.568 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.137 nvme0n1 00:31:11.137 13:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.137 13:14:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.137 13:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.137 13:14:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:11.137 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.137 13:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.137 13:14:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.137 13:14:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.137 13:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.137 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.398 13:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.398 13:14:16 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:11.398 13:14:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:11.398 13:14:16 -- host/auth.sh@44 -- # digest=sha256 00:31:11.398 13:14:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:11.398 13:14:16 -- host/auth.sh@44 -- # keyid=1 00:31:11.398 13:14:16 -- host/auth.sh@45 -- # key=DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:11.398 13:14:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:11.398 13:14:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:11.398 13:14:16 -- host/auth.sh@49 -- # echo DHHC-1:00:MWJiNDg5YTQxNzZlOGFmNzA5YWM0ZWMwMDczNmQzZTQyOTQzMTU2ZDkxZDllZTJllnVCJw==: 00:31:11.398 13:14:16 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:11.398 13:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.398 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.398 13:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.398 13:14:16 -- host/auth.sh@119 -- # get_main_ns_ip 00:31:11.398 13:14:16 -- nvmf/common.sh@717 -- # local ip 00:31:11.398 13:14:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:11.398 13:14:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:11.398 13:14:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.398 13:14:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.398 13:14:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:11.398 13:14:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.398 13:14:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:11.398 13:14:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:11.398 13:14:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:11.398 13:14:16 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:11.398 13:14:16 -- common/autotest_common.sh@638 -- # local es=0 00:31:11.398 13:14:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:11.398 13:14:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:11.398 13:14:16 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:11.398 13:14:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:11.398 13:14:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:11.398 13:14:16 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:11.398 13:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.398 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.398 request: 00:31:11.398 { 00:31:11.398 "name": "nvme0", 00:31:11.398 "trtype": "tcp", 00:31:11.398 "traddr": "10.0.0.1", 00:31:11.398 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:11.398 "adrfam": "ipv4", 00:31:11.398 "trsvcid": "4420", 00:31:11.398 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:11.398 "method": "bdev_nvme_attach_controller", 00:31:11.398 "req_id": 1 00:31:11.398 } 00:31:11.398 Got JSON-RPC error response 00:31:11.398 response: 00:31:11.398 { 00:31:11.398 "code": -32602, 00:31:11.398 "message": "Invalid parameters" 00:31:11.398 } 00:31:11.398 13:14:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:11.398 13:14:16 -- common/autotest_common.sh@641 -- # es=1 00:31:11.398 13:14:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:11.398 13:14:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:11.398 13:14:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:11.398 13:14:16 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.398 13:14:16 -- host/auth.sh@121 -- # jq length 00:31:11.398 13:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.398 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.398 13:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.398 13:14:16 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:31:11.398 13:14:16 -- host/auth.sh@124 -- # get_main_ns_ip 00:31:11.398 13:14:16 -- nvmf/common.sh@717 -- # local ip 00:31:11.398 13:14:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:11.398 13:14:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:11.398 13:14:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.398 13:14:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.398 13:14:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:11.398 13:14:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.398 13:14:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:11.398 13:14:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:11.398 13:14:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:11.398 13:14:16 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:11.398 13:14:16 -- common/autotest_common.sh@638 -- # local es=0 00:31:11.398 13:14:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:11.398 13:14:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:11.398 13:14:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:11.398 13:14:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:11.399 13:14:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:11.399 13:14:16 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:11.399 13:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.399 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.399 request: 00:31:11.399 { 00:31:11.399 "name": "nvme0", 00:31:11.399 "trtype": "tcp", 00:31:11.399 "traddr": "10.0.0.1", 00:31:11.399 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:11.399 "adrfam": "ipv4", 00:31:11.399 "trsvcid": "4420", 00:31:11.399 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:11.399 "dhchap_key": "key2", 00:31:11.399 "method": "bdev_nvme_attach_controller", 00:31:11.399 "req_id": 1 00:31:11.399 } 00:31:11.399 Got JSON-RPC error response 00:31:11.399 response: 00:31:11.399 { 00:31:11.399 "code": -32602, 00:31:11.399 "message": "Invalid parameters" 00:31:11.399 } 00:31:11.399 13:14:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:11.399 13:14:16 -- common/autotest_common.sh@641 -- # es=1 00:31:11.399 13:14:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:11.399 13:14:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:11.399 13:14:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:11.399 13:14:16 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.399 13:14:16 -- host/auth.sh@127 -- # jq length 00:31:11.399 13:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.399 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.399 13:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.399 13:14:16 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:31:11.399 13:14:16 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:31:11.399 13:14:16 -- host/auth.sh@130 -- # cleanup 00:31:11.399 13:14:16 -- host/auth.sh@24 -- # nvmftestfini 00:31:11.399 13:14:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:11.399 13:14:16 -- nvmf/common.sh@117 -- # sync 00:31:11.399 13:14:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:11.399 13:14:16 -- nvmf/common.sh@120 -- # set +e 00:31:11.399 13:14:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:11.399 13:14:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:11.659 rmmod nvme_tcp 00:31:11.659 rmmod nvme_fabrics 00:31:11.659 13:14:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:11.659 13:14:16 -- nvmf/common.sh@124 -- # set -e 00:31:11.659 13:14:16 -- nvmf/common.sh@125 -- # return 0 00:31:11.659 13:14:16 -- nvmf/common.sh@478 -- # '[' -n 4176968 ']' 00:31:11.659 13:14:16 -- nvmf/common.sh@479 -- # killprocess 4176968 00:31:11.659 13:14:16 -- common/autotest_common.sh@936 -- # '[' -z 4176968 ']' 00:31:11.659 13:14:16 -- common/autotest_common.sh@940 -- # kill -0 4176968 00:31:11.659 13:14:16 -- common/autotest_common.sh@941 -- # uname 00:31:11.659 13:14:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:11.659 13:14:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4176968 00:31:11.659 13:14:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:11.659 13:14:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:11.659 13:14:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4176968' 00:31:11.659 killing process with pid 4176968 00:31:11.659 13:14:16 -- common/autotest_common.sh@955 -- # kill 4176968 00:31:11.659 13:14:16 -- common/autotest_common.sh@960 -- # wait 4176968 00:31:11.659 13:14:16 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:11.659 13:14:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:11.659 13:14:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:11.659 13:14:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:11.659 13:14:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:11.659 13:14:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.659 13:14:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.659 13:14:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.200 13:14:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:14.200 13:14:18 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:14.200 13:14:18 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:14.200 13:14:18 -- host/auth.sh@27 -- # clean_kernel_target 00:31:14.200 13:14:18 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:14.200 13:14:18 -- nvmf/common.sh@675 -- # echo 0 00:31:14.200 13:14:18 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:14.200 13:14:18 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:14.200 13:14:18 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:14.200 13:14:18 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:14.200 13:14:18 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:31:14.200 13:14:18 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:31:14.200 13:14:18 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:17.509 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:17.509 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:17.509 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:17.510 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:17.770 13:14:22 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.h1h /tmp/spdk.key-null.Hpy /tmp/spdk.key-sha256.oAQ /tmp/spdk.key-sha384.aVR /tmp/spdk.key-sha512.pz4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:17.770 13:14:22 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:21.072 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:80:01.5 (8086 0b00): Already using the 
vfio-pci driver 00:31:21.072 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:21.072 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:21.072 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:21.333 00:31:21.333 real 0m57.717s 00:31:21.333 user 0m51.240s 00:31:21.333 sys 0m14.995s 00:31:21.333 13:14:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:21.333 13:14:26 -- common/autotest_common.sh@10 -- # set +x 00:31:21.333 ************************************ 00:31:21.333 END TEST nvmf_auth 00:31:21.333 ************************************ 00:31:21.594 13:14:26 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:31:21.594 13:14:26 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:21.594 13:14:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:21.594 13:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:21.594 13:14:26 -- common/autotest_common.sh@10 -- # set +x 00:31:21.594 ************************************ 00:31:21.594 START TEST nvmf_digest 00:31:21.594 ************************************ 00:31:21.594 13:14:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:21.856 * Looking for test storage... 
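For readers following the nvmf_auth trace above: each keyid iteration boils down to the same short RPC sequence on the SPDK host side, attaching to the kernel nvmet target on 10.0.0.1:4420 with one DH-HMAC-CHAP key at a time and tearing the controller down before the next key. A condensed sketch, using the same RPC methods and parameters that appear in the trace (the test drives them through its rpc_cmd helper; rpc.py paths are abbreviated here, and the key names and NQNs are the test's own illustrative values):

    # restrict the initiator to one digest / DH-group combination for this pass
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach over TCP to the kernel target, authenticating with the key under test
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
    # confirm the controller came up, then detach before trying the next key
    scripts/rpc.py bdev_nvme_get_controllers
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The negative cases near the end of the trace repeat the attach without a key (and with a mismatched key) and expect the JSON-RPC "Invalid parameters" error seen above.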
00:31:21.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:21.856 13:14:26 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.856 13:14:26 -- nvmf/common.sh@7 -- # uname -s 00:31:21.856 13:14:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.856 13:14:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.856 13:14:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.856 13:14:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.856 13:14:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.856 13:14:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.856 13:14:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.856 13:14:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.856 13:14:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.856 13:14:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.856 13:14:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:21.856 13:14:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:21.856 13:14:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.856 13:14:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.856 13:14:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.856 13:14:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.856 13:14:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.856 13:14:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.856 13:14:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.856 13:14:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.856 13:14:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.856 13:14:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.856 13:14:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.856 13:14:26 -- paths/export.sh@5 -- # export PATH 00:31:21.857 13:14:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.857 13:14:26 -- nvmf/common.sh@47 -- # : 0 00:31:21.857 13:14:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:21.857 13:14:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:21.857 13:14:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.857 13:14:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.857 13:14:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.857 13:14:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:21.857 13:14:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:21.857 13:14:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:21.857 13:14:26 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:21.857 13:14:26 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:21.857 13:14:26 -- host/digest.sh@16 -- # runtime=2 00:31:21.857 13:14:26 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:21.857 13:14:26 -- host/digest.sh@138 -- # nvmftestinit 00:31:21.857 13:14:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:21.857 13:14:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.857 13:14:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:21.857 13:14:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:21.857 13:14:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:21.857 13:14:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.857 13:14:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.857 13:14:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.857 13:14:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:21.857 13:14:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:21.857 13:14:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:21.857 13:14:26 -- common/autotest_common.sh@10 -- # set +x 00:31:30.011 13:14:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:30.011 13:14:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:30.011 13:14:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:30.011 13:14:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:30.011 13:14:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:30.011 13:14:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:30.011 13:14:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:30.011 13:14:33 -- 
nvmf/common.sh@295 -- # net_devs=() 00:31:30.011 13:14:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:30.011 13:14:33 -- nvmf/common.sh@296 -- # e810=() 00:31:30.011 13:14:33 -- nvmf/common.sh@296 -- # local -ga e810 00:31:30.011 13:14:33 -- nvmf/common.sh@297 -- # x722=() 00:31:30.011 13:14:33 -- nvmf/common.sh@297 -- # local -ga x722 00:31:30.011 13:14:33 -- nvmf/common.sh@298 -- # mlx=() 00:31:30.011 13:14:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:30.011 13:14:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.011 13:14:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.012 13:14:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:30.012 13:14:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:30.012 13:14:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:30.012 13:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:30.012 13:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:30.012 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:30.012 13:14:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:30.012 13:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:30.012 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:30.012 13:14:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:30.012 13:14:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:30.012 13:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.012 13:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:30.012 13:14:33 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.012 13:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:30.012 Found net devices under 0000:31:00.0: cvl_0_0 00:31:30.012 13:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.012 13:14:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:30.012 13:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.012 13:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:30.012 13:14:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.012 13:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:30.012 Found net devices under 0000:31:00.1: cvl_0_1 00:31:30.012 13:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.012 13:14:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:30.012 13:14:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:30.012 13:14:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:30.012 13:14:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:30.012 13:14:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.012 13:14:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.012 13:14:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.012 13:14:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:30.012 13:14:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.012 13:14:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.012 13:14:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:30.012 13:14:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.012 13:14:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.012 13:14:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:30.012 13:14:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:30.012 13:14:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.012 13:14:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.012 13:14:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.012 13:14:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.012 13:14:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:30.012 13:14:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.012 13:14:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.012 13:14:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.012 13:14:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:30.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:31:30.012 00:31:30.012 --- 10.0.0.2 ping statistics --- 00:31:30.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.012 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:31:30.012 13:14:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:30.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:31:30.012 00:31:30.012 --- 10.0.0.1 ping statistics --- 00:31:30.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.012 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:31:30.012 13:14:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.012 13:14:34 -- nvmf/common.sh@411 -- # return 0 00:31:30.012 13:14:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:30.012 13:14:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.012 13:14:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:30.012 13:14:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:30.012 13:14:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.012 13:14:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:30.012 13:14:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:30.012 13:14:34 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:30.012 13:14:34 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:30.012 13:14:34 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:30.012 13:14:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:30.012 13:14:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:30.012 13:14:34 -- common/autotest_common.sh@10 -- # set +x 00:31:30.012 ************************************ 00:31:30.012 START TEST nvmf_digest_clean 00:31:30.012 ************************************ 00:31:30.012 13:14:34 -- common/autotest_common.sh@1111 -- # run_digest 00:31:30.012 13:14:34 -- host/digest.sh@120 -- # local dsa_initiator 00:31:30.012 13:14:34 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:30.012 13:14:34 -- host/digest.sh@121 -- # dsa_initiator=false 00:31:30.012 13:14:34 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:30.012 13:14:34 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:30.012 13:14:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:30.012 13:14:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:30.012 13:14:34 -- common/autotest_common.sh@10 -- # set +x 00:31:30.012 13:14:34 -- nvmf/common.sh@470 -- # nvmfpid=4193390 00:31:30.012 13:14:34 -- nvmf/common.sh@471 -- # waitforlisten 4193390 00:31:30.012 13:14:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:30.012 13:14:34 -- common/autotest_common.sh@817 -- # '[' -z 4193390 ']' 00:31:30.012 13:14:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.012 13:14:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:30.012 13:14:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.012 13:14:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:30.012 13:14:34 -- common/autotest_common.sh@10 -- # set +x 00:31:30.012 [2024-04-26 13:14:34.318668] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
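The nvmf_tcp_init trace above builds the usual two-port loopback test-bed for the digest run: the first E810 port (cvl_0_0) is moved into a private network namespace and carries the target address, while the second port (cvl_0_1) stays in the root namespace as the initiator. Condensed to the essential commands actually issued in this run (interface names and the 10.0.0.0/24 addresses are the values used here):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check

With both pings succeeding, nvmf_tgt is then started inside cvl_0_0_ns_spdk (as the EAL initialization lines above and below show) and listens on 10.0.0.2:4420 for the digest workloads.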
00:31:30.012 [2024-04-26 13:14:34.318723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.012 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.012 [2024-04-26 13:14:34.390395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.012 [2024-04-26 13:14:34.461701] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.012 [2024-04-26 13:14:34.461739] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.012 [2024-04-26 13:14:34.461747] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.012 [2024-04-26 13:14:34.461754] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.012 [2024-04-26 13:14:34.461760] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.012 [2024-04-26 13:14:34.461785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.274 13:14:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:30.274 13:14:35 -- common/autotest_common.sh@850 -- # return 0 00:31:30.274 13:14:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:30.274 13:14:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:30.274 13:14:35 -- common/autotest_common.sh@10 -- # set +x 00:31:30.274 13:14:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.274 13:14:35 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:30.274 13:14:35 -- host/digest.sh@126 -- # common_target_config 00:31:30.274 13:14:35 -- host/digest.sh@43 -- # rpc_cmd 00:31:30.274 13:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.274 13:14:35 -- common/autotest_common.sh@10 -- # set +x 00:31:30.274 null0 00:31:30.274 [2024-04-26 13:14:35.196100] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.274 [2024-04-26 13:14:35.220265] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.274 13:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.274 13:14:35 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:31:30.274 13:14:35 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:30.274 13:14:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:30.274 13:14:35 -- host/digest.sh@80 -- # rw=randread 00:31:30.274 13:14:35 -- host/digest.sh@80 -- # bs=4096 00:31:30.274 13:14:35 -- host/digest.sh@80 -- # qd=128 00:31:30.274 13:14:35 -- host/digest.sh@80 -- # scan_dsa=false 00:31:30.274 13:14:35 -- host/digest.sh@83 -- # bperfpid=4193736 00:31:30.274 13:14:35 -- host/digest.sh@84 -- # waitforlisten 4193736 /var/tmp/bperf.sock 00:31:30.274 13:14:35 -- common/autotest_common.sh@817 -- # '[' -z 4193736 ']' 00:31:30.274 13:14:35 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:30.274 13:14:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:30.274 13:14:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:30.274 13:14:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:30.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:30.274 13:14:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:30.274 13:14:35 -- common/autotest_common.sh@10 -- # set +x 00:31:30.274 [2024-04-26 13:14:35.273786] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:31:30.274 [2024-04-26 13:14:35.273831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193736 ] 00:31:30.274 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.536 [2024-04-26 13:14:35.351473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.536 [2024-04-26 13:14:35.413765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.105 13:14:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:31.105 13:14:36 -- common/autotest_common.sh@850 -- # return 0 00:31:31.105 13:14:36 -- host/digest.sh@86 -- # false 00:31:31.105 13:14:36 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:31.105 13:14:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:31.365 13:14:36 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:31.365 13:14:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:31.625 nvme0n1 00:31:31.625 13:14:36 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:31.625 13:14:36 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:31.625 Running I/O for 2 seconds... 
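The 4 KiB randread pass launched above (its 2-second latency summary follows) is driven by the same short sequence that every run_bperf iteration in this digest test uses; only the workload, block size and queue depth change between runs. A condensed sketch with paths abbreviated and parameters as used by this first run:

    # start bdevperf paused (-z --wait-for-rpc) so digest options can be set before I/O begins
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # attach over TCP with data digest enabled so crc32c is computed on every I/O
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload, then check which accel module executed the crc32c operations
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats

After each run the test filters accel_get_stats for the crc32c opcode and, with scan_dsa=false as here, expects the software module to have executed a non-zero number of operations before killing the bperf process.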
00:31:34.167 00:31:34.167 Latency(us) 00:31:34.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.167 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:34.167 nvme0n1 : 2.00 19639.10 76.72 0.00 0.00 6511.01 3276.80 20643.84 00:31:34.167 =================================================================================================================== 00:31:34.167 Total : 19639.10 76.72 0.00 0.00 6511.01 3276.80 20643.84 00:31:34.167 0 00:31:34.167 13:14:38 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:34.167 13:14:38 -- host/digest.sh@93 -- # get_accel_stats 00:31:34.168 13:14:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:34.168 13:14:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:34.168 13:14:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:34.168 | select(.opcode=="crc32c") 00:31:34.168 | "\(.module_name) \(.executed)"' 00:31:34.168 13:14:38 -- host/digest.sh@94 -- # false 00:31:34.168 13:14:38 -- host/digest.sh@94 -- # exp_module=software 00:31:34.168 13:14:38 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:34.168 13:14:38 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:34.168 13:14:38 -- host/digest.sh@98 -- # killprocess 4193736 00:31:34.168 13:14:38 -- common/autotest_common.sh@936 -- # '[' -z 4193736 ']' 00:31:34.168 13:14:38 -- common/autotest_common.sh@940 -- # kill -0 4193736 00:31:34.168 13:14:38 -- common/autotest_common.sh@941 -- # uname 00:31:34.168 13:14:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:34.168 13:14:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4193736 00:31:34.168 13:14:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:34.168 13:14:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:34.168 13:14:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4193736' 00:31:34.168 killing process with pid 4193736 00:31:34.168 13:14:38 -- common/autotest_common.sh@955 -- # kill 4193736 00:31:34.168 Received shutdown signal, test time was about 2.000000 seconds 00:31:34.168 00:31:34.168 Latency(us) 00:31:34.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.168 =================================================================================================================== 00:31:34.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:34.168 13:14:38 -- common/autotest_common.sh@960 -- # wait 4193736 00:31:34.168 13:14:38 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:34.168 13:14:38 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:34.168 13:14:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:34.168 13:14:38 -- host/digest.sh@80 -- # rw=randread 00:31:34.168 13:14:38 -- host/digest.sh@80 -- # bs=131072 00:31:34.168 13:14:38 -- host/digest.sh@80 -- # qd=16 00:31:34.168 13:14:38 -- host/digest.sh@80 -- # scan_dsa=false 00:31:34.168 13:14:38 -- host/digest.sh@83 -- # bperfpid=800 00:31:34.168 13:14:38 -- host/digest.sh@84 -- # waitforlisten 800 /var/tmp/bperf.sock 00:31:34.168 13:14:38 -- common/autotest_common.sh@817 -- # '[' -z 800 ']' 00:31:34.168 13:14:38 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:34.168 13:14:38 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:34.168 13:14:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:34.168 13:14:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:34.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:34.168 13:14:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:34.168 13:14:38 -- common/autotest_common.sh@10 -- # set +x 00:31:34.168 [2024-04-26 13:14:39.037613] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:31:34.168 [2024-04-26 13:14:39.037668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800 ] 00:31:34.168 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:34.168 Zero copy mechanism will not be used. 00:31:34.168 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.168 [2024-04-26 13:14:39.112737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.168 [2024-04-26 13:14:39.164242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.739 13:14:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:34.739 13:14:39 -- common/autotest_common.sh@850 -- # return 0 00:31:34.739 13:14:39 -- host/digest.sh@86 -- # false 00:31:34.739 13:14:39 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:34.739 13:14:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:34.999 13:14:39 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:34.999 13:14:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:35.570 nvme0n1 00:31:35.570 13:14:40 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:35.570 13:14:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:35.570 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:35.570 Zero copy mechanism will not be used. 00:31:35.570 Running I/O for 2 seconds... 
00:31:37.482 00:31:37.482 Latency(us) 00:31:37.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.482 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:37.482 nvme0n1 : 2.00 3469.50 433.69 0.00 0.00 4608.54 727.04 13161.81 00:31:37.482 =================================================================================================================== 00:31:37.482 Total : 3469.50 433.69 0.00 0.00 4608.54 727.04 13161.81 00:31:37.482 0 00:31:37.482 13:14:42 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:37.482 13:14:42 -- host/digest.sh@93 -- # get_accel_stats 00:31:37.482 13:14:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:37.482 13:14:42 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:37.482 | select(.opcode=="crc32c") 00:31:37.482 | "\(.module_name) \(.executed)"' 00:31:37.482 13:14:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:37.743 13:14:42 -- host/digest.sh@94 -- # false 00:31:37.743 13:14:42 -- host/digest.sh@94 -- # exp_module=software 00:31:37.743 13:14:42 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:37.743 13:14:42 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:37.743 13:14:42 -- host/digest.sh@98 -- # killprocess 800 00:31:37.743 13:14:42 -- common/autotest_common.sh@936 -- # '[' -z 800 ']' 00:31:37.743 13:14:42 -- common/autotest_common.sh@940 -- # kill -0 800 00:31:37.743 13:14:42 -- common/autotest_common.sh@941 -- # uname 00:31:37.743 13:14:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:37.743 13:14:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 800 00:31:37.743 13:14:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:37.743 13:14:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:37.743 13:14:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 800' 00:31:37.743 killing process with pid 800 00:31:37.743 13:14:42 -- common/autotest_common.sh@955 -- # kill 800 00:31:37.743 Received shutdown signal, test time was about 2.000000 seconds 00:31:37.743 00:31:37.743 Latency(us) 00:31:37.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.743 =================================================================================================================== 00:31:37.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:37.743 13:14:42 -- common/autotest_common.sh@960 -- # wait 800 00:31:37.743 13:14:42 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:37.743 13:14:42 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:37.743 13:14:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:37.743 13:14:42 -- host/digest.sh@80 -- # rw=randwrite 00:31:37.743 13:14:42 -- host/digest.sh@80 -- # bs=4096 00:31:37.743 13:14:42 -- host/digest.sh@80 -- # qd=128 00:31:37.743 13:14:42 -- host/digest.sh@80 -- # scan_dsa=false 00:31:37.743 13:14:42 -- host/digest.sh@83 -- # bperfpid=1660 00:31:37.743 13:14:42 -- host/digest.sh@84 -- # waitforlisten 1660 /var/tmp/bperf.sock 00:31:37.743 13:14:42 -- common/autotest_common.sh@817 -- # '[' -z 1660 ']' 00:31:37.743 13:14:42 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:37.743 13:14:42 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:31:37.743 13:14:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:37.743 13:14:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:37.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:37.743 13:14:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:37.743 13:14:42 -- common/autotest_common.sh@10 -- # set +x 00:31:38.003 [2024-04-26 13:14:42.844138] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:31:38.003 [2024-04-26 13:14:42.844194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660 ] 00:31:38.003 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.003 [2024-04-26 13:14:42.919693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.003 [2024-04-26 13:14:42.972122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.574 13:14:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:38.574 13:14:43 -- common/autotest_common.sh@850 -- # return 0 00:31:38.574 13:14:43 -- host/digest.sh@86 -- # false 00:31:38.574 13:14:43 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:38.574 13:14:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:38.834 13:14:43 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:38.834 13:14:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:39.093 nvme0n1 00:31:39.093 13:14:44 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:39.093 13:14:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:39.093 Running I/O for 2 seconds... 
00:31:41.632 00:31:41.632 Latency(us) 00:31:41.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.632 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.632 nvme0n1 : 2.01 21264.79 83.07 0.00 0.00 6014.75 2225.49 14745.60 00:31:41.632 =================================================================================================================== 00:31:41.632 Total : 21264.79 83.07 0.00 0.00 6014.75 2225.49 14745.60 00:31:41.632 0 00:31:41.632 13:14:46 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:41.632 13:14:46 -- host/digest.sh@93 -- # get_accel_stats 00:31:41.632 13:14:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:41.632 13:14:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:41.632 | select(.opcode=="crc32c") 00:31:41.632 | "\(.module_name) \(.executed)"' 00:31:41.632 13:14:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:41.632 13:14:46 -- host/digest.sh@94 -- # false 00:31:41.632 13:14:46 -- host/digest.sh@94 -- # exp_module=software 00:31:41.632 13:14:46 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:41.632 13:14:46 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:41.632 13:14:46 -- host/digest.sh@98 -- # killprocess 1660 00:31:41.632 13:14:46 -- common/autotest_common.sh@936 -- # '[' -z 1660 ']' 00:31:41.632 13:14:46 -- common/autotest_common.sh@940 -- # kill -0 1660 00:31:41.632 13:14:46 -- common/autotest_common.sh@941 -- # uname 00:31:41.632 13:14:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:41.632 13:14:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1660 00:31:41.632 13:14:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:41.632 13:14:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:41.632 13:14:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1660' 00:31:41.632 killing process with pid 1660 00:31:41.632 13:14:46 -- common/autotest_common.sh@955 -- # kill 1660 00:31:41.632 Received shutdown signal, test time was about 2.000000 seconds 00:31:41.632 00:31:41.632 Latency(us) 00:31:41.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.632 =================================================================================================================== 00:31:41.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:41.632 13:14:46 -- common/autotest_common.sh@960 -- # wait 1660 00:31:41.632 13:14:46 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:41.632 13:14:46 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:41.632 13:14:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:41.632 13:14:46 -- host/digest.sh@80 -- # rw=randwrite 00:31:41.632 13:14:46 -- host/digest.sh@80 -- # bs=131072 00:31:41.632 13:14:46 -- host/digest.sh@80 -- # qd=16 00:31:41.632 13:14:46 -- host/digest.sh@80 -- # scan_dsa=false 00:31:41.632 13:14:46 -- host/digest.sh@83 -- # bperfpid=2374 00:31:41.632 13:14:46 -- host/digest.sh@84 -- # waitforlisten 2374 /var/tmp/bperf.sock 00:31:41.632 13:14:46 -- common/autotest_common.sh@817 -- # '[' -z 2374 ']' 00:31:41.632 13:14:46 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:41.632 13:14:46 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:41.632 13:14:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:41.632 13:14:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:41.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:41.632 13:14:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:41.632 13:14:46 -- common/autotest_common.sh@10 -- # set +x 00:31:41.632 [2024-04-26 13:14:46.571025] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:31:41.632 [2024-04-26 13:14:46.571078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374 ] 00:31:41.632 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:41.632 Zero copy mechanism will not be used. 00:31:41.632 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.632 [2024-04-26 13:14:46.645986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.892 [2024-04-26 13:14:46.697860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.490 13:14:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:42.490 13:14:47 -- common/autotest_common.sh@850 -- # return 0 00:31:42.490 13:14:47 -- host/digest.sh@86 -- # false 00:31:42.490 13:14:47 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:42.490 13:14:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:42.490 13:14:47 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:42.490 13:14:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:43.061 nvme0n1 00:31:43.061 13:14:47 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:43.061 13:14:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:43.061 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:43.061 Zero copy mechanism will not be used. 00:31:43.061 Running I/O for 2 seconds... 
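After each timed run (including the one whose results follow), the script verifies that the checksums were really computed by the expected accel module: it queries accel_get_stats over the same socket, filters the crc32c entry with jq, and asserts that the module name matches (software in this configuration, since scan_dsa=false) and that the executed count is greater than zero. A sketch of that check, with the jq filter copied verbatim from the trace and SPDK_ROOT as in the earlier sketch:

  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints e.g. "software <count>"; the test only requires module == software and count > 0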
00:31:44.971 00:31:44.971 Latency(us) 00:31:44.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.971 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:44.971 nvme0n1 : 2.00 4217.92 527.24 0.00 0.00 3788.94 1856.85 11195.73 00:31:44.971 =================================================================================================================== 00:31:44.971 Total : 4217.92 527.24 0.00 0.00 3788.94 1856.85 11195.73 00:31:44.971 0 00:31:44.971 13:14:49 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:44.971 13:14:49 -- host/digest.sh@93 -- # get_accel_stats 00:31:44.971 13:14:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:44.971 13:14:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:44.971 | select(.opcode=="crc32c") 00:31:44.971 | "\(.module_name) \(.executed)"' 00:31:44.971 13:14:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:45.232 13:14:50 -- host/digest.sh@94 -- # false 00:31:45.232 13:14:50 -- host/digest.sh@94 -- # exp_module=software 00:31:45.232 13:14:50 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:45.232 13:14:50 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:45.232 13:14:50 -- host/digest.sh@98 -- # killprocess 2374 00:31:45.232 13:14:50 -- common/autotest_common.sh@936 -- # '[' -z 2374 ']' 00:31:45.232 13:14:50 -- common/autotest_common.sh@940 -- # kill -0 2374 00:31:45.232 13:14:50 -- common/autotest_common.sh@941 -- # uname 00:31:45.232 13:14:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:45.232 13:14:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2374 00:31:45.232 13:14:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:45.232 13:14:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:45.232 13:14:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2374' 00:31:45.232 killing process with pid 2374 00:31:45.232 13:14:50 -- common/autotest_common.sh@955 -- # kill 2374 00:31:45.232 Received shutdown signal, test time was about 2.000000 seconds 00:31:45.232 00:31:45.232 Latency(us) 00:31:45.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.232 =================================================================================================================== 00:31:45.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:45.232 13:14:50 -- common/autotest_common.sh@960 -- # wait 2374 00:31:45.494 13:14:50 -- host/digest.sh@132 -- # killprocess 4193390 00:31:45.494 13:14:50 -- common/autotest_common.sh@936 -- # '[' -z 4193390 ']' 00:31:45.494 13:14:50 -- common/autotest_common.sh@940 -- # kill -0 4193390 00:31:45.494 13:14:50 -- common/autotest_common.sh@941 -- # uname 00:31:45.494 13:14:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:45.494 13:14:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4193390 00:31:45.494 13:14:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:45.494 13:14:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:45.494 13:14:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4193390' 00:31:45.494 killing process with pid 4193390 00:31:45.494 13:14:50 -- common/autotest_common.sh@955 -- # kill 4193390 00:31:45.494 13:14:50 -- common/autotest_common.sh@960 -- # wait 4193390 00:31:45.494 00:31:45.494 real 
0m16.223s 00:31:45.494 user 0m31.992s 00:31:45.494 sys 0m3.278s 00:31:45.494 13:14:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:45.494 13:14:50 -- common/autotest_common.sh@10 -- # set +x 00:31:45.494 ************************************ 00:31:45.494 END TEST nvmf_digest_clean 00:31:45.494 ************************************ 00:31:45.494 13:14:50 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:45.494 13:14:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:45.494 13:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:45.494 13:14:50 -- common/autotest_common.sh@10 -- # set +x 00:31:45.756 ************************************ 00:31:45.756 START TEST nvmf_digest_error 00:31:45.756 ************************************ 00:31:45.756 13:14:50 -- common/autotest_common.sh@1111 -- # run_digest_error 00:31:45.756 13:14:50 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:45.756 13:14:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:45.756 13:14:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:45.756 13:14:50 -- common/autotest_common.sh@10 -- # set +x 00:31:45.756 13:14:50 -- nvmf/common.sh@470 -- # nvmfpid=3218 00:31:45.756 13:14:50 -- nvmf/common.sh@471 -- # waitforlisten 3218 00:31:45.756 13:14:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:45.756 13:14:50 -- common/autotest_common.sh@817 -- # '[' -z 3218 ']' 00:31:45.756 13:14:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.756 13:14:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:45.756 13:14:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.756 13:14:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:45.756 13:14:50 -- common/autotest_common.sh@10 -- # set +x 00:31:45.756 [2024-04-26 13:14:50.732812] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:31:45.756 [2024-04-26 13:14:50.732875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.756 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.756 [2024-04-26 13:14:50.804032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.018 [2024-04-26 13:14:50.877329] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.018 [2024-04-26 13:14:50.877367] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.018 [2024-04-26 13:14:50.877374] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.018 [2024-04-26 13:14:50.877380] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.018 [2024-04-26 13:14:50.877386] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:46.018 [2024-04-26 13:14:50.877405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.590 13:14:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:46.590 13:14:51 -- common/autotest_common.sh@850 -- # return 0 00:31:46.590 13:14:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:46.590 13:14:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:46.590 13:14:51 -- common/autotest_common.sh@10 -- # set +x 00:31:46.591 13:14:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.591 13:14:51 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:46.591 13:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.591 13:14:51 -- common/autotest_common.sh@10 -- # set +x 00:31:46.591 [2024-04-26 13:14:51.543322] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:46.591 13:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.591 13:14:51 -- host/digest.sh@105 -- # common_target_config 00:31:46.591 13:14:51 -- host/digest.sh@43 -- # rpc_cmd 00:31:46.591 13:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.591 13:14:51 -- common/autotest_common.sh@10 -- # set +x 00:31:46.591 null0 00:31:46.591 [2024-04-26 13:14:51.623774] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.591 [2024-04-26 13:14:51.647961] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.851 13:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.851 13:14:51 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:46.851 13:14:51 -- host/digest.sh@54 -- # local rw bs qd 00:31:46.851 13:14:51 -- host/digest.sh@56 -- # rw=randread 00:31:46.851 13:14:51 -- host/digest.sh@56 -- # bs=4096 00:31:46.851 13:14:51 -- host/digest.sh@56 -- # qd=128 00:31:46.851 13:14:51 -- host/digest.sh@58 -- # bperfpid=3547 00:31:46.851 13:14:51 -- host/digest.sh@60 -- # waitforlisten 3547 /var/tmp/bperf.sock 00:31:46.851 13:14:51 -- common/autotest_common.sh@817 -- # '[' -z 3547 ']' 00:31:46.851 13:14:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:46.851 13:14:51 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:46.851 13:14:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:46.851 13:14:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:46.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:46.851 13:14:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:46.851 13:14:51 -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 [2024-04-26 13:14:51.700490] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
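For the error-path suite the target itself is restarted with --wait-for-rpc so that, before any subsystem exists, the crc32c opcode can be re-routed to the accel "error" module; the notices above show exactly that assignment ("Operation crc32c will be assigned to module error") followed by the null0 bdev, the TCP transport and the 10.0.0.2:4420 listener coming up. A sketch of the target-side steps that are directly visible in this excerpt (the body of common_target_config is not expanded here, so only the opcode assignment and the implied framework init are reproduced; the target runs inside the cvl_0_0_ns_spdk network namespace as shown above):

  TGT_RPC="$SPDK_ROOT/scripts/rpc.py"   # defaults to the target's /var/tmp/spdk.sock
  # route crc32c to the error-injection accel module while the target still waits for RPC
  $TGT_RPC accel_assign_opc -o crc32c -m error
  # complete target start-up; the transport/listener notices above follow from this point
  $TGT_RPC framework_start_init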
00:31:46.851 [2024-04-26 13:14:51.700537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547 ] 00:31:46.851 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.851 [2024-04-26 13:14:51.776435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.851 [2024-04-26 13:14:51.829079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.420 13:14:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:47.420 13:14:52 -- common/autotest_common.sh@850 -- # return 0 00:31:47.420 13:14:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:47.420 13:14:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:47.680 13:14:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:47.680 13:14:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.680 13:14:52 -- common/autotest_common.sh@10 -- # set +x 00:31:47.680 13:14:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.680 13:14:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:47.680 13:14:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:47.941 nvme0n1 00:31:47.941 13:14:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:47.941 13:14:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.941 13:14:52 -- common/autotest_common.sh@10 -- # set +x 00:31:47.941 13:14:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.941 13:14:52 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:47.941 13:14:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:47.941 Running I/O for 2 seconds... 
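The error run then layers fault injection on top of the same attach sequence: on the bdevperf side the NVMe error statistics and bdev retry count are set, injection is explicitly disabled on the target while the controller connects, and only once nvme0n1 exists is the crc32c "corrupt" injection armed and the workload started. With the target's crc32c results corrupted, the initiator's nvme_tcp layer reports the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that fill the remainder of this run. A sketch of that sequence, again using only the commands from the trace (BPERF_RPC talks to /var/tmp/bperf.sock, TGT_RPC to the target's default socket):

  BPERF_RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"
  TGT_RPC="$SPDK_ROOT/scripts/rpc.py"
  # host-side options used by the error test: per-opcode NVMe error stats, retry count -1
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: make sure nothing is injected while the controller attaches
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm corruption of crc32c results on the target, then run the workload
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests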
00:31:47.941 [2024-04-26 13:14:52.947684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:47.941 [2024-04-26 13:14:52.947715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.941 [2024-04-26 13:14:52.947723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.941 [2024-04-26 13:14:52.958197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:47.941 [2024-04-26 13:14:52.958217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.941 [2024-04-26 13:14:52.958224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.941 [2024-04-26 13:14:52.972936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:47.941 [2024-04-26 13:14:52.972954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.941 [2024-04-26 13:14:52.972961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.941 [2024-04-26 13:14:52.984583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:47.942 [2024-04-26 13:14:52.984602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.942 [2024-04-26 13:14:52.984609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.942 [2024-04-26 13:14:52.997731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:47.942 [2024-04-26 13:14:52.997750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.942 [2024-04-26 13:14:52.997756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.010419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.010437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.010443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.022730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.022752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.022762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.035917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.035937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.035944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.048973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.048990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.049000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.061879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.061897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.061903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.072964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.072981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.072987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.086253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.086270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.086277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.098014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.098032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.098038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.111687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.111705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.111711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.123182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.123199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.123206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.137417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.137434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.137441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.149865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.149883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.149889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.163082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.163100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.163106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.203 [2024-04-26 13:14:53.176054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.203 [2024-04-26 13:14:53.176072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.203 [2024-04-26 13:14:53.176079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.204 [2024-04-26 13:14:53.188200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.204 [2024-04-26 13:14:53.188217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.204 [2024-04-26 13:14:53.188224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.204 [2024-04-26 13:14:53.200389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.204 [2024-04-26 13:14:53.200407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:48.204 [2024-04-26 13:14:53.200414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.204 [2024-04-26 13:14:53.212904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.204 [2024-04-26 13:14:53.212923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.204 [2024-04-26 13:14:53.212930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.204 [2024-04-26 13:14:53.225181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.204 [2024-04-26 13:14:53.225199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.204 [2024-04-26 13:14:53.225205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.204 [2024-04-26 13:14:53.238545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.204 [2024-04-26 13:14:53.238567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.204 [2024-04-26 13:14:53.238574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.204 [2024-04-26 13:14:53.251674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.204 [2024-04-26 13:14:53.251692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.204 [2024-04-26 13:14:53.251698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.480 [2024-04-26 13:14:53.264962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.480 [2024-04-26 13:14:53.264981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.480 [2024-04-26 13:14:53.264988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.277051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.277072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.277079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.288217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.288234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:8699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.288241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.301008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.301026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.301032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.314490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.314507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.314513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.327634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.327651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.327658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.339859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.339877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.339884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.353039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.353056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.353062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.365028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.365046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.365052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.378835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.378857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.378864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.391900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.391921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.391928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.401672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.401690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.401696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.415368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.415387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.415395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.429408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.429426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.429433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.439956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.439973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.439980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.453434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.453453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.453462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.466498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 
00:31:48.481 [2024-04-26 13:14:53.466517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.466523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.478818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.478840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.478847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.491880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.491897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.491904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.505341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.505358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.505365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.518008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.518025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.518031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.481 [2024-04-26 13:14:53.529684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.481 [2024-04-26 13:14:53.529702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.481 [2024-04-26 13:14:53.529708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.544018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.544036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.544043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.556490] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.556510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.556517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.569216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.569238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.569246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.582832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.582855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.582862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.594311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.594328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.594338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.605402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.605419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.605426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.619636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.619653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.619660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.634682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.634699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.634706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.646802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.646819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.646825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.658061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.658079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.658085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.672104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.672121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.672127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.684770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.684787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.684793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.696352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.696370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.696377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.708941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.708959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.708965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.723758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.723776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.723783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.737133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.737149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.737156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.746993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.747010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.747017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.759882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.759900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.759906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.774758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.774775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.774781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.786924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.786941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.786950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.746 [2024-04-26 13:14:53.800185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:48.746 [2024-04-26 13:14:53.800202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.746 [2024-04-26 13:14:53.800209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.811854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.811872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.811879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.824328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.824346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.824352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.836847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.836866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.836873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.850289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.850308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.850315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.862454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.862471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.862478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.876244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.876262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.876268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.888114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.888132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.888138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.901582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.901600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:49.008 [2024-04-26 13:14:53.901607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.913872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.913889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.913896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.926671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.926689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.926696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.938797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.938815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.008 [2024-04-26 13:14:53.938822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.008 [2024-04-26 13:14:53.950664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.008 [2024-04-26 13:14:53.950682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:53.950688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:53.964588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:53.964605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:53.964611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:53.977208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:53.977226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:53.977233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:53.990496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:53.990517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1367 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:53.990523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:54.002059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:54.002077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:54.002086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:54.015012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:54.015029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:54.015036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:54.026756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:54.026774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:54.026781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:54.040409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:54.040427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:54.040433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:54.052373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:54.052390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:54.052396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.009 [2024-04-26 13:14:54.065858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.009 [2024-04-26 13:14:54.065876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.009 [2024-04-26 13:14:54.065882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.269 [2024-04-26 13:14:54.078130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.269 [2024-04-26 13:14:54.078148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.269 [2024-04-26 13:14:54.078155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.269 [2024-04-26 13:14:54.092474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.269 [2024-04-26 13:14:54.092494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.269 [2024-04-26 13:14:54.092502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.269 [2024-04-26 13:14:54.106378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.269 [2024-04-26 13:14:54.106395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.269 [2024-04-26 13:14:54.106402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.269 [2024-04-26 13:14:54.117067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.269 [2024-04-26 13:14:54.117087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.269 [2024-04-26 13:14:54.117094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.269 [2024-04-26 13:14:54.131007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.269 [2024-04-26 13:14:54.131026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.269 [2024-04-26 13:14:54.131032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.269 [2024-04-26 13:14:54.145164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.269 [2024-04-26 13:14:54.145182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.269 [2024-04-26 13:14:54.145191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.269 [2024-04-26 13:14:54.155817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.155834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.155844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.170511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 
00:31:49.270 [2024-04-26 13:14:54.170528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.170535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.182617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.182634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.182641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.195889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.195907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.195914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.208793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.208811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.208818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.221544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.221562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.221569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.233750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.233768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.233775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.246244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.246263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.246272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.258484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.258505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.258512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.272622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.272640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.272647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.283097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.283114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.283121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.297051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.297069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.297075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.310659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.310679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.310686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.270 [2024-04-26 13:14:54.323938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.270 [2024-04-26 13:14:54.323956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.270 [2024-04-26 13:14:54.323962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.531 [2024-04-26 13:14:54.336505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.531 [2024-04-26 13:14:54.336526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.531 [2024-04-26 13:14:54.336536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.531 [2024-04-26 13:14:54.349289] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.531 [2024-04-26 13:14:54.349307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.531 [2024-04-26 13:14:54.349313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.531 [2024-04-26 13:14:54.359614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.531 [2024-04-26 13:14:54.359631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.531 [2024-04-26 13:14:54.359638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.531 [2024-04-26 13:14:54.372562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.531 [2024-04-26 13:14:54.372580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.531 [2024-04-26 13:14:54.372587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.531 [2024-04-26 13:14:54.386048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.386066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.386073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.400112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.400132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.400139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.412351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.412369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.412376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.424472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.424490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.424497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:49.532 [2024-04-26 13:14:54.436410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.436427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.436433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.450782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.450803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.450810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.461182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.461201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.461208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.474508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.474526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.474532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.488562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.488581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.488588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.501572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.501590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.501597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.514456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.514474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.514481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.528141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.528160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.528167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.540159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.540177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.540183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.550402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.550419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.550430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.565559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.565576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.565582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.577418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.577436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.577444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.532 [2024-04-26 13:14:54.590190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.532 [2024-04-26 13:14:54.590208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.532 [2024-04-26 13:14:54.590215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.793 [2024-04-26 13:14:54.603070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.793 [2024-04-26 13:14:54.603089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.793 [2024-04-26 13:14:54.603095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.793 [2024-04-26 13:14:54.616352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.793 [2024-04-26 13:14:54.616371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.793 [2024-04-26 13:14:54.616378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.793 [2024-04-26 13:14:54.628964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.793 [2024-04-26 13:14:54.628985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.793 [2024-04-26 13:14:54.628992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.793 [2024-04-26 13:14:54.640970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.793 [2024-04-26 13:14:54.640988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.793 [2024-04-26 13:14:54.640995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.793 [2024-04-26 13:14:54.652529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.793 [2024-04-26 13:14:54.652547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.793 [2024-04-26 13:14:54.652553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.793 [2024-04-26 13:14:54.666956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.793 [2024-04-26 13:14:54.666979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.793 [2024-04-26 13:14:54.666985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.793 [2024-04-26 13:14:54.679960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.793 [2024-04-26 13:14:54.679981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.679988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.690903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.690922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 
[2024-04-26 13:14:54.690931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.704170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.704190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.704197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.717204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.717225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.717231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.730988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.731009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.731015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.742704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.742725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.742732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.755138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.755157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.755167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.767436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.767454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.767460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.780150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.780169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16139 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.780175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.792436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.792454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.792461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.806005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.806022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.806029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.817971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.817989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.817995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.830184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.830201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.830208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.794 [2024-04-26 13:14:54.843929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:49.794 [2024-04-26 13:14:54.843947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.794 [2024-04-26 13:14:54.843954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 [2024-04-26 13:14:54.856996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:50.055 [2024-04-26 13:14:54.857014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.055 [2024-04-26 13:14:54.857021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 [2024-04-26 13:14:54.870315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:50.055 [2024-04-26 13:14:54.870334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.055 [2024-04-26 13:14:54.870341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 [2024-04-26 13:14:54.882431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:50.055 [2024-04-26 13:14:54.882451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.055 [2024-04-26 13:14:54.882463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 [2024-04-26 13:14:54.893655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:50.055 [2024-04-26 13:14:54.893672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.055 [2024-04-26 13:14:54.893679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 [2024-04-26 13:14:54.908197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:50.055 [2024-04-26 13:14:54.908213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.055 [2024-04-26 13:14:54.908220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 [2024-04-26 13:14:54.919363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:50.055 [2024-04-26 13:14:54.919381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.055 [2024-04-26 13:14:54.919387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 [2024-04-26 13:14:54.932050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2452260) 00:31:50.055 [2024-04-26 13:14:54.932068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.055 [2024-04-26 13:14:54.932074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.055 00:31:50.055 Latency(us) 00:31:50.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.055 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:50.055 nvme0n1 : 2.00 20028.67 78.24 0.00 0.00 6384.49 2457.60 17039.36 00:31:50.055 =================================================================================================================== 00:31:50.055 Total : 20028.67 78.24 0.00 0.00 6384.49 2457.60 17039.36 00:31:50.055 0 00:31:50.055 13:14:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:50.055 13:14:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:50.055 13:14:54 -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:50.055 | .driver_specific 00:31:50.055 | .nvme_error 00:31:50.055 | .status_code 00:31:50.055 | .command_transient_transport_error' 00:31:50.055 13:14:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:50.316 13:14:55 -- host/digest.sh@71 -- # (( 157 > 0 )) 00:31:50.316 13:14:55 -- host/digest.sh@73 -- # killprocess 3547 00:31:50.316 13:14:55 -- common/autotest_common.sh@936 -- # '[' -z 3547 ']' 00:31:50.316 13:14:55 -- common/autotest_common.sh@940 -- # kill -0 3547 00:31:50.316 13:14:55 -- common/autotest_common.sh@941 -- # uname 00:31:50.316 13:14:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:50.316 13:14:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3547 00:31:50.316 13:14:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:50.316 13:14:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:50.316 13:14:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3547' 00:31:50.316 killing process with pid 3547 00:31:50.316 13:14:55 -- common/autotest_common.sh@955 -- # kill 3547 00:31:50.316 Received shutdown signal, test time was about 2.000000 seconds 00:31:50.316 00:31:50.316 Latency(us) 00:31:50.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.316 =================================================================================================================== 00:31:50.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:50.316 13:14:55 -- common/autotest_common.sh@960 -- # wait 3547 00:31:50.316 13:14:55 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:31:50.316 13:14:55 -- host/digest.sh@54 -- # local rw bs qd 00:31:50.316 13:14:55 -- host/digest.sh@56 -- # rw=randread 00:31:50.316 13:14:55 -- host/digest.sh@56 -- # bs=131072 00:31:50.316 13:14:55 -- host/digest.sh@56 -- # qd=16 00:31:50.316 13:14:55 -- host/digest.sh@58 -- # bperfpid=4302 00:31:50.316 13:14:55 -- host/digest.sh@60 -- # waitforlisten 4302 /var/tmp/bperf.sock 00:31:50.316 13:14:55 -- common/autotest_common.sh@817 -- # '[' -z 4302 ']' 00:31:50.316 13:14:55 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:50.316 13:14:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:50.316 13:14:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:50.316 13:14:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:50.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:50.316 13:14:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:50.316 13:14:55 -- common/autotest_common.sh@10 -- # set +x 00:31:50.316 [2024-04-26 13:14:55.332133] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:31:50.316 [2024-04-26 13:14:55.332187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4302 ] 00:31:50.316 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:50.316 Zero copy mechanism will not be used. 
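The get_transient_errcount trace above reads the per-bdev NVMe error counters back over the bperf RPC socket and extracts the transient-transport-error count with jq. A minimal standalone sketch of the same check, assuming the bperf RPC socket is still listening and that bdev_nvme_set_options --nvme-error-stat was applied to nvme0n1 (the rpc.py path, socket and jq filter are the ones shown in the trace):

    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                  -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
               | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The run above reported 157 such errors; any non-zero count means the injected
    # data digest corruption was surfaced to the host as COMMAND TRANSIENT TRANSPORT ERROR.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"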
00:31:50.316 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.577 [2024-04-26 13:14:55.407507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.577 [2024-04-26 13:14:55.458257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.152 13:14:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:51.152 13:14:56 -- common/autotest_common.sh@850 -- # return 0 00:31:51.152 13:14:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:51.152 13:14:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:51.459 13:14:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:51.459 13:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.459 13:14:56 -- common/autotest_common.sh@10 -- # set +x 00:31:51.459 13:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.459 13:14:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:51.459 13:14:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:51.787 nvme0n1 00:31:51.787 13:14:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:51.787 13:14:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.787 13:14:56 -- common/autotest_common.sh@10 -- # set +x 00:31:51.787 13:14:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.787 13:14:56 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:51.787 13:14:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:51.787 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:51.787 Zero copy mechanism will not be used. 00:31:51.787 Running I/O for 2 seconds... 
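The trace above prepares the second error case (randread, 131072-byte I/O, queue depth 16, data digest enabled) before the two-second run whose digest errors follow. A hedged condensation of that RPC sequence, with socket paths, target address and NQN copied from the trace; accel_error_inject_error is issued through the rpc_cmd helper rather than the bperf socket (shown here against the default rpc.py socket, which is an assumption), and -i 32 is reproduced verbatim rather than interpreted:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1      # expose per-status-code error counters
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # attach with crc32c error injection off
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
           -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                     # data digest (ddgst) enabled on attach
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # begin corrupting crc32c results
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests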
00:31:51.787 [2024-04-26 13:14:56.623766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.787 [2024-04-26 13:14:56.623803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.787 [2024-04-26 13:14:56.623812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.787 [2024-04-26 13:14:56.630101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.787 [2024-04-26 13:14:56.630121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.787 [2024-04-26 13:14:56.630129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.787 [2024-04-26 13:14:56.636246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.787 [2024-04-26 13:14:56.636264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.787 [2024-04-26 13:14:56.636271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.787 [2024-04-26 13:14:56.642161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.787 [2024-04-26 13:14:56.642179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.787 [2024-04-26 13:14:56.642185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.787 [2024-04-26 13:14:56.651621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.787 [2024-04-26 13:14:56.651639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.787 [2024-04-26 13:14:56.651646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.787 [2024-04-26 13:14:56.661637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.787 [2024-04-26 13:14:56.661655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.787 [2024-04-26 13:14:56.661662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.787 [2024-04-26 13:14:56.671169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.671186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.671193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.680703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.680721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.680728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.690926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.690943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.690950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.702700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.702718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.702724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.714019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.714037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.714043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.723871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.723888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.723894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.734371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.734388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.734395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.744123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.744141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.744147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.755479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.755496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.755503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.766348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.766365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.766371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.779228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.779245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.779252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.788 [2024-04-26 13:14:56.792534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:51.788 [2024-04-26 13:14:56.792552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.788 [2024-04-26 13:14:56.792561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.805213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.805231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.805238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.818182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.818200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.818206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.831580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.831597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:52.063 [2024-04-26 13:14:56.831603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.845312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.845330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.845336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.858888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.858906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.858913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.872336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.872353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.872359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.885739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.885757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.885763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.899210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.899231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.899238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.912595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.912613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.912620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.925312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.925330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.925336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.937148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.937166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.937173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.947456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.947474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.947480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.957222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.957239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.957246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.968442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.968460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.968467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.979393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.979411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.979417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.988990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.989009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.989015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:56.999593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:56.999610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:56.999620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:57.011449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:57.011468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:57.011474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:57.020057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:57.020075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:57.020081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:57.030792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:57.030809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:57.030815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:57.040964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:57.040981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.063 [2024-04-26 13:14:57.040987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.063 [2024-04-26 13:14:57.053134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.063 [2024-04-26 13:14:57.053151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.064 [2024-04-26 13:14:57.053157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.064 [2024-04-26 13:14:57.064251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.064 [2024-04-26 13:14:57.064269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.064 [2024-04-26 13:14:57.064275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.064 [2024-04-26 13:14:57.074198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 
00:31:52.064 [2024-04-26 13:14:57.074215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.064 [2024-04-26 13:14:57.074221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.064 [2024-04-26 13:14:57.083571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.064 [2024-04-26 13:14:57.083588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.064 [2024-04-26 13:14:57.083594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.064 [2024-04-26 13:14:57.093640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.064 [2024-04-26 13:14:57.093660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.064 [2024-04-26 13:14:57.093667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.064 [2024-04-26 13:14:57.103624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.064 [2024-04-26 13:14:57.103642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.064 [2024-04-26 13:14:57.103648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.064 [2024-04-26 13:14:57.112830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.064 [2024-04-26 13:14:57.112852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.064 [2024-04-26 13:14:57.112859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.122524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.122541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.122548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.131945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.131962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.131968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.140927] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.140944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.140950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.151698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.151717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.151723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.162335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.162354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.162361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.172395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.172414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.172421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.179807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.179825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.179832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.190230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.190248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.190254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.200207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.200226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.200232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:31:52.325 [2024-04-26 13:14:57.210619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.210637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.210644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.221135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.221153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.221159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.231285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.231303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.241292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.241310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.241317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.251118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.251137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.251143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.260992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.261009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.261019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.270195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.270214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.270221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.280247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.280265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.280271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.288817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.288835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.288846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.298896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.298914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.298921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.308649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.308667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.308673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.318893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.318911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.325 [2024-04-26 13:14:57.318917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.325 [2024-04-26 13:14:57.329730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.325 [2024-04-26 13:14:57.329748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.326 [2024-04-26 13:14:57.329755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.326 [2024-04-26 13:14:57.339409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.326 [2024-04-26 13:14:57.339427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.326 [2024-04-26 13:14:57.339434] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.326 [2024-04-26 13:14:57.349869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.326 [2024-04-26 13:14:57.349887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.326 [2024-04-26 13:14:57.349894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.326 [2024-04-26 13:14:57.360485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.326 [2024-04-26 13:14:57.360503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.326 [2024-04-26 13:14:57.360510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.326 [2024-04-26 13:14:57.370769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.326 [2024-04-26 13:14:57.370787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.326 [2024-04-26 13:14:57.370794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.326 [2024-04-26 13:14:57.382061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.326 [2024-04-26 13:14:57.382079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.326 [2024-04-26 13:14:57.382086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.392300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.392318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.392325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.402764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.402783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.402790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.410946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.410965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.410971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.422076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.422093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.422100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.429352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.429369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.429378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.440180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.440198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.440204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.451116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.451134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.451141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.462240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.462258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.462265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.473610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.473628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.473634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.484033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.484050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:52.587 [2024-04-26 13:14:57.484057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.493774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.493791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.493798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.504338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.504355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.504362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.516592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.516609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.516616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.526128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.526151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.526157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.535612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.535630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.535636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.547071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.547090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.547096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.557056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.557075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24480 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.557081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.566460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.566478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.566484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.575385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.575402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.587 [2024-04-26 13:14:57.575408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.587 [2024-04-26 13:14:57.586682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.587 [2024-04-26 13:14:57.586700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.588 [2024-04-26 13:14:57.586706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.588 [2024-04-26 13:14:57.597743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.588 [2024-04-26 13:14:57.597761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.588 [2024-04-26 13:14:57.597768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.588 [2024-04-26 13:14:57.607767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.588 [2024-04-26 13:14:57.607785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.588 [2024-04-26 13:14:57.607791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.588 [2024-04-26 13:14:57.618325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.588 [2024-04-26 13:14:57.618343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.588 [2024-04-26 13:14:57.618350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.588 [2024-04-26 13:14:57.629170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.588 [2024-04-26 13:14:57.629189] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.588 [2024-04-26 13:14:57.629195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.588 [2024-04-26 13:14:57.639366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.588 [2024-04-26 13:14:57.639384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.588 [2024-04-26 13:14:57.639391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.849 [2024-04-26 13:14:57.649565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.849 [2024-04-26 13:14:57.649584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.849 [2024-04-26 13:14:57.649591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.849 [2024-04-26 13:14:57.660314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.849 [2024-04-26 13:14:57.660333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.849 [2024-04-26 13:14:57.660339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.849 [2024-04-26 13:14:57.670210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.849 [2024-04-26 13:14:57.670229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.849 [2024-04-26 13:14:57.670236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.849 [2024-04-26 13:14:57.679389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.849 [2024-04-26 13:14:57.679408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.849 [2024-04-26 13:14:57.679414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.849 [2024-04-26 13:14:57.689568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.849 [2024-04-26 13:14:57.689586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.849 [2024-04-26 13:14:57.689593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.849 [2024-04-26 13:14:57.700740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.849 [2024-04-26 13:14:57.700758] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.849 [2024-04-26 13:14:57.700768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.849 [2024-04-26 13:14:57.711187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.849 [2024-04-26 13:14:57.711205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.849 [2024-04-26 13:14:57.711212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.718434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.718452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.718458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.729109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.729128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.729134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.739096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.739115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.739121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.750714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.750733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.750739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.760696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.760714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.760721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.770786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.770804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.770811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.780408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.780426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.780433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.791517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.791538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.791544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.799870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.799888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.799894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.810122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.810140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.810147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.822665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.822683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.822690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.832623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.832642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.832648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.841959] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.841981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.841990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.853345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.853364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.853370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.863248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.863267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.863273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.873590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.873608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.873615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.883527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.883546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.883552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.893377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.893395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.893401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.850 [2024-04-26 13:14:57.904054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:52.850 [2024-04-26 13:14:57.904073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.850 [2024-04-26 13:14:57.904080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:53.111 [2024-04-26 13:14:57.913545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.913565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.913571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.924558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.924576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.924583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.934529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.934548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.934554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.944350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.944368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.944375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.954787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.954806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.954812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.966943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.966962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.966971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.976832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.976855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.976862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.986671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.986689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.986695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:57.997829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:57.997852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:57.997858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.007681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.007699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.007706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.016631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.016649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.016655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.027359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.027378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.027384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.037080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.037105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.047145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.047164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.047170] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.058176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.058194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.058200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.069662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.069680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.069687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.080326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.080345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.080351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.089538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.089556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.089563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.099621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.099640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.099646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.109599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.109618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.109625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.118535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.111 [2024-04-26 13:14:58.118553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.111 [2024-04-26 13:14:58.118560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.111 [2024-04-26 13:14:58.127988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.112 [2024-04-26 13:14:58.128007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.112 [2024-04-26 13:14:58.128013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.112 [2024-04-26 13:14:58.138010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.112 [2024-04-26 13:14:58.138028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.112 [2024-04-26 13:14:58.138037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.112 [2024-04-26 13:14:58.147443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.112 [2024-04-26 13:14:58.147462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.112 [2024-04-26 13:14:58.147469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.112 [2024-04-26 13:14:58.157045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.112 [2024-04-26 13:14:58.157064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.112 [2024-04-26 13:14:58.157070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.112 [2024-04-26 13:14:58.166123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.112 [2024-04-26 13:14:58.166142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.112 [2024-04-26 13:14:58.166149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.174780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.174799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.174806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.184310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.184329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:53.373 [2024-04-26 13:14:58.184336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.193377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.193396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.193403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.202871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.202889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.202896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.213138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.213157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.213163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.223669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.223690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.223696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.233971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.233989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.233996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.244126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.244145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.244152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.253877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.253895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.253901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.263488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.263506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.263513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.272392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.272410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.272416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.280918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.280936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.280943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.288440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.288458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.373 [2024-04-26 13:14:58.288465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.373 [2024-04-26 13:14:58.296996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.373 [2024-04-26 13:14:58.297014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.297021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.307802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.307820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.307827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.317580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.317598] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.317605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.326716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.326734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.326740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.334913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.334932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.334938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.345914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.345932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.345939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.356208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.356227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.356233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.365672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.365690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.365697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.375552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.375571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.375577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.385447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.385465] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.385475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.394748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.394767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.394773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.404566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.404584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.404590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.414531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.414549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.414556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.423409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.423428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.423435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.374 [2024-04-26 13:14:58.432238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.374 [2024-04-26 13:14:58.432257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.374 [2024-04-26 13:14:58.432263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.635 [2024-04-26 13:14:58.441576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.441594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.441601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.450786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.450804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.450811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.461630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.461648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.461655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.470814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.470841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.470847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.482368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.482385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.482392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.492896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.492914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.492920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.501970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.501988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.501996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.511583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.511601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.511607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.521060] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.521078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.521085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.531431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.531451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.531457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.541083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.541101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.541108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.551058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.551077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.551083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.561198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.561218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.561224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.570696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.570715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.570721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.579135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.579153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.579159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:31:53.636 [2024-04-26 13:14:58.588841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.588859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.588865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.600157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.600176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.600182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.609351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.609376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.636 [2024-04-26 13:14:58.617669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9da40) 00:31:53.636 [2024-04-26 13:14:58.617688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.636 [2024-04-26 13:14:58.617695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.636 00:31:53.636 Latency(us) 00:31:53.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.636 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:53.636 nvme0n1 : 2.00 3062.31 382.79 0.00 0.00 5222.95 914.77 13762.56 00:31:53.636 =================================================================================================================== 00:31:53.636 Total : 3062.31 382.79 0.00 0.00 5222.95 914.77 13762.56 00:31:53.636 0 00:31:53.636 13:14:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:53.636 13:14:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:53.636 13:14:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:53.636 13:14:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:53.636 | .driver_specific 00:31:53.636 | .nvme_error 00:31:53.636 | .status_code 00:31:53.636 | .command_transient_transport_error' 00:31:53.896 13:14:58 -- host/digest.sh@71 -- # (( 197 > 0 )) 00:31:53.896 13:14:58 -- host/digest.sh@73 -- # killprocess 4302 00:31:53.896 13:14:58 -- common/autotest_common.sh@936 -- # '[' -z 4302 ']' 00:31:53.896 13:14:58 -- common/autotest_common.sh@940 -- # kill -0 4302 00:31:53.896 13:14:58 -- common/autotest_common.sh@941 -- # uname 00:31:53.896 13:14:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:53.896 13:14:58 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4302 00:31:53.896 13:14:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:53.896 13:14:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:53.896 13:14:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4302' 00:31:53.896 killing process with pid 4302 00:31:53.896 13:14:58 -- common/autotest_common.sh@955 -- # kill 4302 00:31:53.896 Received shutdown signal, test time was about 2.000000 seconds 00:31:53.896 00:31:53.896 Latency(us) 00:31:53.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.897 =================================================================================================================== 00:31:53.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:53.897 13:14:58 -- common/autotest_common.sh@960 -- # wait 4302 00:31:54.156 13:14:58 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:31:54.156 13:14:58 -- host/digest.sh@54 -- # local rw bs qd 00:31:54.156 13:14:58 -- host/digest.sh@56 -- # rw=randwrite 00:31:54.156 13:14:58 -- host/digest.sh@56 -- # bs=4096 00:31:54.156 13:14:58 -- host/digest.sh@56 -- # qd=128 00:31:54.156 13:14:58 -- host/digest.sh@58 -- # bperfpid=5006 00:31:54.156 13:14:58 -- host/digest.sh@60 -- # waitforlisten 5006 /var/tmp/bperf.sock 00:31:54.156 13:14:58 -- common/autotest_common.sh@817 -- # '[' -z 5006 ']' 00:31:54.156 13:14:58 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:54.156 13:14:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:54.156 13:14:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:54.156 13:14:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:54.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:54.156 13:14:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:54.156 13:14:58 -- common/autotest_common.sh@10 -- # set +x 00:31:54.156 [2024-04-26 13:14:59.022350] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:31:54.156 [2024-04-26 13:14:59.022402] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5006 ] 00:31:54.156 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.156 [2024-04-26 13:14:59.099697] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.156 [2024-04-26 13:14:59.151092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.727 13:14:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:54.727 13:14:59 -- common/autotest_common.sh@850 -- # return 0 00:31:54.727 13:14:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:54.727 13:14:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:54.988 13:14:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:54.988 13:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.988 13:14:59 -- common/autotest_common.sh@10 -- # set +x 00:31:54.988 13:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.988 13:14:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:54.988 13:14:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.248 nvme0n1 00:31:55.248 13:15:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:55.248 13:15:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.248 13:15:00 -- common/autotest_common.sh@10 -- # set +x 00:31:55.248 13:15:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.248 13:15:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:55.248 13:15:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:55.248 Running I/O for 2 seconds... 
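The trace above is the setup for the next pass of the digest test: bdevperf is restarted on /var/tmp/bperf.sock with a 4 KiB randwrite workload at queue depth 128, NVMe error statistics and unlimited bdev retries are switched on, the controller is attached with --ddgst so data digests are carried on the TCP transport, and crc32c corruption is injected through accel_error_inject_error before the I/O is kicked off. Condensed into a standalone form, the flow looks roughly like the sketch below; every command is taken from the trace, while the script framing and the socket-wait loop are illustration only, not part of host/digest.sh itself.

#!/usr/bin/env bash
# Sketch of the randwrite digest-error pass shown in the surrounding trace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf on its own RPC socket; -z makes it wait until perform_tests is sent.
"$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperf_pid=$!
until [ -S "$SOCK" ]; do sleep 0.1; done   # stand-in for the trace's waitforlisten

# Keep per-command NVMe error statistics and retry failed I/O indefinitely in the bdev layer.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# The trace issues the accel error-injection RPCs through rpc_cmd (default app socket), not bperf.sock.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the target subsystem with data digest enabled over TCP; the resulting bdev is nvme0n1.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c the accel framework computes, so data digests start failing.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second workload, then count completions that ended in a transient transport error,
# the same check get_transient_errcount performed after the randread pass above.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

kill "$bperf_pid"

The data digest errors that follow are therefore the expected result of the injected crc32c corruption; the test passes as long as the transient-transport-error counter read back through bdev_get_iostat is greater than zero.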
00:31:55.248 [2024-04-26 13:15:00.293369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eb760 00:31:55.248 [2024-04-26 13:15:00.295223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.248 [2024-04-26 13:15:00.295253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.248 [2024-04-26 13:15:00.303260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ef6a8 00:31:55.248 [2024-04-26 13:15:00.304383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.248 [2024-04-26 13:15:00.304402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.316244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ee5c8 00:31:55.509 [2024-04-26 13:15:00.317359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.317376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.328460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ed4e8 00:31:55.509 [2024-04-26 13:15:00.329575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.329590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.340697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ec408 00:31:55.509 [2024-04-26 13:15:00.341812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.341828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.352925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eb328 00:31:55.509 [2024-04-26 13:15:00.354038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.354054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.365071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ea248 00:31:55.509 [2024-04-26 13:15:00.366187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.366206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 
cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.377298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e9168 00:31:55.509 [2024-04-26 13:15:00.378406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.378422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.389462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e8088 00:31:55.509 [2024-04-26 13:15:00.390558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.390573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.509 [2024-04-26 13:15:00.401620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e6fa8 00:31:55.509 [2024-04-26 13:15:00.402744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.509 [2024-04-26 13:15:00.402759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.413808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e5ec8 00:31:55.510 [2024-04-26 13:15:00.414917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.414933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.426032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190de470 00:31:55.510 [2024-04-26 13:15:00.427157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.427173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.438190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f4f40 00:31:55.510 [2024-04-26 13:15:00.439301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.439317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.450391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f3e60 00:31:55.510 [2024-04-26 13:15:00.451503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.451519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.462556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f2d80 00:31:55.510 [2024-04-26 13:15:00.463665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.463681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.474746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fdeb0 00:31:55.510 [2024-04-26 13:15:00.475855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.475871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.486922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ff3c8 00:31:55.510 [2024-04-26 13:15:00.488029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.488045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.499056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fc998 00:31:55.510 [2024-04-26 13:15:00.500194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.500209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.511249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fb8b8 00:31:55.510 [2024-04-26 13:15:00.512383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.512399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.522607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eee38 00:31:55.510 [2024-04-26 13:15:00.523723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.523738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.537155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eff18 00:31:55.510 [2024-04-26 13:15:00.538962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.538979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.547770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fb480 00:31:55.510 [2024-04-26 13:15:00.548955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.548971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:55.510 [2024-04-26 13:15:00.559928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fa3a0 00:31:55.510 [2024-04-26 13:15:00.561052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.510 [2024-04-26 13:15:00.561068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:55.771 [2024-04-26 13:15:00.572072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f92c0 00:31:55.771 [2024-04-26 13:15:00.573196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.771 [2024-04-26 13:15:00.573212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:55.771 [2024-04-26 13:15:00.583461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f35f0 00:31:55.771 [2024-04-26 13:15:00.584575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.771 [2024-04-26 13:15:00.584590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:55.771 [2024-04-26 13:15:00.595489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f3e60 00:31:55.771 [2024-04-26 13:15:00.596577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.771 [2024-04-26 13:15:00.596593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:55.771 [2024-04-26 13:15:00.608418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ff3c8 00:31:55.771 [2024-04-26 13:15:00.609524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.771 [2024-04-26 13:15:00.609541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.771 [2024-04-26 13:15:00.620579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fc998 00:31:55.771 [2024-04-26 13:15:00.621683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.771 [2024-04-26 13:15:00.621699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.771 [2024-04-26 13:15:00.631959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eee38 00:31:55.771 [2024-04-26 13:15:00.633021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.771 [2024-04-26 13:15:00.633037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:55.771 [2024-04-26 13:15:00.644887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f9b30 00:31:55.772 [2024-04-26 13:15:00.646005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.646021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.657097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f8a50 00:31:55.772 [2024-04-26 13:15:00.658215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.658230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.669288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f7970 00:31:55.772 [2024-04-26 13:15:00.670393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.670408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.681463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f35f0 00:31:55.772 [2024-04-26 13:15:00.682565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.682585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.693627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f2510 00:31:55.772 [2024-04-26 13:15:00.694720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.694735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.705849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1430 00:31:55.772 [2024-04-26 13:15:00.706921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 
13:15:00.706936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.717988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0350 00:31:55.772 [2024-04-26 13:15:00.719094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.719109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.730127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fe720 00:31:55.772 [2024-04-26 13:15:00.731241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.731257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.742397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fd208 00:31:55.772 [2024-04-26 13:15:00.743502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.743518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.754532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ebb98 00:31:55.772 [2024-04-26 13:15:00.755621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.755637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.766675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f8a50 00:31:55.772 [2024-04-26 13:15:00.767786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.767801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.780380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0350 00:31:55.772 [2024-04-26 13:15:00.782179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.782194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.790990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fc998 00:31:55.772 [2024-04-26 13:15:00.792094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:55.772 [2024-04-26 13:15:00.792109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.803145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fb8b8 00:31:55.772 [2024-04-26 13:15:00.804233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.804249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.815270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e01f8 00:31:55.772 [2024-04-26 13:15:00.816379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.816394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:55.772 [2024-04-26 13:15:00.827446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190df118 00:31:55.772 [2024-04-26 13:15:00.828510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.772 [2024-04-26 13:15:00.828525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.033 [2024-04-26 13:15:00.838753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190de470 00:31:56.033 [2024-04-26 13:15:00.839829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.033 [2024-04-26 13:15:00.839848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.033 [2024-04-26 13:15:00.851697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190df988 00:31:56.034 [2024-04-26 13:15:00.852775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.852790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.863849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e0a68 00:31:56.034 [2024-04-26 13:15:00.864915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.864930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.875270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eb328 00:31:56.034 [2024-04-26 13:15:00.876318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17281 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.876333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.888233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ec408 00:31:56.034 [2024-04-26 13:15:00.889305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.889321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.900634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ed4e8 00:31:56.034 [2024-04-26 13:15:00.901704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.901719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.912776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ee5c8 00:31:56.034 [2024-04-26 13:15:00.913808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.913823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.924926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ef6a8 00:31:56.034 [2024-04-26 13:15:00.925984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.926000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.937057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0788 00:31:56.034 [2024-04-26 13:15:00.938142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.938158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.950806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.034 [2024-04-26 13:15:00.952579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.952595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.961420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f8a50 00:31:56.034 [2024-04-26 13:15:00.962507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:10413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.962523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.973575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f7970 00:31:56.034 [2024-04-26 13:15:00.974664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.974679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.985751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f6890 00:31:56.034 [2024-04-26 13:15:00.986840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.986856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:00.997907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e99d8 00:31:56.034 [2024-04-26 13:15:00.998963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:00.998981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:01.010046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ebb98 00:31:56.034 [2024-04-26 13:15:01.011126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:01.011142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:01.022205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e5ec8 00:31:56.034 [2024-04-26 13:15:01.023292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:01.023307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:01.034394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e38d0 00:31:56.034 [2024-04-26 13:15:01.035475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:01.035491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:01.046598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f57b0 00:31:56.034 [2024-04-26 13:15:01.047681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:01.047697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:01.058730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e7818 00:31:56.034 [2024-04-26 13:15:01.059743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:01.059759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:01.070919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f7538 00:31:56.034 [2024-04-26 13:15:01.072001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:01.072017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.034 [2024-04-26 13:15:01.083072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f8618 00:31:56.034 [2024-04-26 13:15:01.084159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.034 [2024-04-26 13:15:01.084174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.296 [2024-04-26 13:15:01.095263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f96f8 00:31:56.296 [2024-04-26 13:15:01.096370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.296 [2024-04-26 13:15:01.096386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.296 [2024-04-26 13:15:01.106653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0788 00:31:56.296 [2024-04-26 13:15:01.107727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.296 [2024-04-26 13:15:01.107742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.296 [2024-04-26 13:15:01.119615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.296 [2024-04-26 13:15:01.120682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.296 [2024-04-26 13:15:01.120698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.296 [2024-04-26 13:15:01.131797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f2948 00:31:56.296 [2024-04-26 13:15:01.132858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.296 [2024-04-26 13:15:01.132874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.296 [2024-04-26 13:15:01.145500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f3a28 00:31:56.296 [2024-04-26 13:15:01.147235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.296 [2024-04-26 13:15:01.147250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.296 [2024-04-26 13:15:01.156101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e5ec8 00:31:56.296 [2024-04-26 13:15:01.157188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.296 [2024-04-26 13:15:01.157203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.168229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e38d0 00:31:56.297 [2024-04-26 13:15:01.169317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.169333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.180419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f57b0 00:31:56.297 [2024-04-26 13:15:01.181507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.181523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.192612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190df118 00:31:56.297 [2024-04-26 13:15:01.193695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.193711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.204794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190edd58 00:31:56.297 [2024-04-26 13:15:01.205885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.205901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.216995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fb8b8 00:31:56.297 [2024-04-26 
13:15:01.218072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.218088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.229137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fc998 00:31:56.297 [2024-04-26 13:15:01.230220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.230236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.240531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e3060 00:31:56.297 [2024-04-26 13:15:01.241595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.241610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.252652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ff3c8 00:31:56.297 [2024-04-26 13:15:01.253720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.253735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.267123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f2948 00:31:56.297 [2024-04-26 13:15:01.268731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.268747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.277651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f20d8 00:31:56.297 [2024-04-26 13:15:01.278691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.278708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.289951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0ff8 00:31:56.297 [2024-04-26 13:15:01.291016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.291032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.302118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eff18 
00:31:56.297 [2024-04-26 13:15:01.303164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.303181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.313429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ef6a8 00:31:56.297 [2024-04-26 13:15:01.314461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.314476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.328486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e88f8 00:31:56.297 [2024-04-26 13:15:01.330387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.330403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.339105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eff18 00:31:56.297 [2024-04-26 13:15:01.340292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.340308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.297 [2024-04-26 13:15:01.351245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190dfdc0 00:31:56.297 [2024-04-26 13:15:01.352445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.297 [2024-04-26 13:15:01.352461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.363423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fa3a0 00:31:56.559 [2024-04-26 13:15:01.364583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.364599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.375607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fb480 00:31:56.559 [2024-04-26 13:15:01.376807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.376824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.387749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with 
pdu=0x2000190ea680 00:31:56.559 [2024-04-26 13:15:01.388915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.388931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.399963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190feb58 00:31:56.559 [2024-04-26 13:15:01.401174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.401190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.412176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fc560 00:31:56.559 [2024-04-26 13:15:01.413372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.413388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.424356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e0ea0 00:31:56.559 [2024-04-26 13:15:01.425557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.425575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.436537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190efae0 00:31:56.559 [2024-04-26 13:15:01.437758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.437775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.447943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fb8b8 00:31:56.559 [2024-04-26 13:15:01.449136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.449152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.460864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fc998 00:31:56.559 [2024-04-26 13:15:01.462089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.462104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.473007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e41df0) with pdu=0x2000190ff3c8 00:31:56.559 [2024-04-26 13:15:01.474182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.474198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.559 [2024-04-26 13:15:01.485164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eaab8 00:31:56.559 [2024-04-26 13:15:01.486366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-04-26 13:15:01.486382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.497359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fb048 00:31:56.560 [2024-04-26 13:15:01.498562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.498578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.509505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f9f68 00:31:56.560 [2024-04-26 13:15:01.510707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.510723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.521706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eaef0 00:31:56.560 [2024-04-26 13:15:01.522907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.522923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.535373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0788 00:31:56.560 [2024-04-26 13:15:01.537273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.537290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.547670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e27f0 00:31:56.560 [2024-04-26 13:15:01.549535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.549551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.557465] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e1b48 00:31:56.560 [2024-04-26 13:15:01.558643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.558659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.570564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ef6a8 00:31:56.560 [2024-04-26 13:15:01.571842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.571858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.582908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0788 00:31:56.560 [2024-04-26 13:15:01.584264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.584280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.595121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.560 [2024-04-26 13:15:01.596475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.596491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.560 [2024-04-26 13:15:01.608775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e3060 00:31:56.560 [2024-04-26 13:15:01.610816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-04-26 13:15:01.610832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.821 [2024-04-26 13:15:01.619363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.821 [2024-04-26 13:15:01.620726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.821 [2024-04-26 13:15:01.620741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.821 [2024-04-26 13:15:01.631516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.632857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.632873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.643659] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.645014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.645030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.655776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.657148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.657163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.667922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.669266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.669282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.680084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.681435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.681451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.692231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.693571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.693587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.704347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.705692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.705707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.716466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.717812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.717827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 
13:15:01.728628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.729999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.730015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.740764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.742088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.742107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.753019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.754353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.754369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.765167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.766518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.766535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.777302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.778657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.778673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.789441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.790779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.790795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.801573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.802913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.802928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:31:56.822 [2024-04-26 13:15:01.813738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.815080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.815096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.825873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.827240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.827255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.838035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.839374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.839391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.850174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.851509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.851525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.862306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.863654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.863670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.822 [2024-04-26 13:15:01.874449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:56.822 [2024-04-26 13:15:01.875805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.822 [2024-04-26 13:15:01.875821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.886592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:57.084 [2024-04-26 13:15:01.887912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.887928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.898739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:57.084 [2024-04-26 13:15:01.900285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.900301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.911048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e1710 00:31:57.084 [2024-04-26 13:15:01.912362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.912378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.923245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fdeb0 00:31:57.084 [2024-04-26 13:15:01.924593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.924608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.935446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f7100 00:31:57.084 [2024-04-26 13:15:01.936775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.936791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.947622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e8088 00:31:57.084 [2024-04-26 13:15:01.948921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.948937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.959802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f35f0 00:31:57.084 [2024-04-26 13:15:01.961152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.961169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.973522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e49b0 00:31:57.084 [2024-04-26 13:15:01.975535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.975550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.984134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ef6a8 00:31:57.084 [2024-04-26 13:15:01.985485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.985500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:01.996290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e23b8 00:31:57.084 [2024-04-26 13:15:01.997629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:01.997645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.007665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ff3c8 00:31:57.084 [2024-04-26 13:15:02.009008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.009024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.020599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eaab8 00:31:57.084 [2024-04-26 13:15:02.021916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.021932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.032764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eff18 00:31:57.084 [2024-04-26 13:15:02.034097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.034112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.044957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f0ff8 00:31:57.084 [2024-04-26 13:15:02.046288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.046304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.057131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f20d8 00:31:57.084 [2024-04-26 13:15:02.058431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.058449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.069324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f57b0 00:31:57.084 [2024-04-26 13:15:02.070651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.070666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.083060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e6738 00:31:57.084 [2024-04-26 13:15:02.084999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.085014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.093598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e5ec8 00:31:57.084 [2024-04-26 13:15:02.094936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.094951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.105739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ddc00 00:31:57.084 [2024-04-26 13:15:02.107055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.107070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.117901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f1868 00:31:57.084 [2024-04-26 13:15:02.119226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.119241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.130069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e95a0 00:31:57.084 [2024-04-26 13:15:02.131390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.131405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.084 [2024-04-26 13:15:02.142259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ee190 00:31:57.084 [2024-04-26 13:15:02.143599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.084 [2024-04-26 13:15:02.143615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.345 [2024-04-26 13:15:02.153669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e1710 00:31:57.346 [2024-04-26 13:15:02.154987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.155002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.166594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190df550 00:31:57.346 [2024-04-26 13:15:02.167872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.167888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.178788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e88f8 00:31:57.346 [2024-04-26 13:15:02.180108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.180123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.190936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190f20d8 00:31:57.346 [2024-04-26 13:15:02.192273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.192288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.203085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e2c28 00:31:57.346 [2024-04-26 13:15:02.204380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.204395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.215274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e8d30 00:31:57.346 [2024-04-26 13:15:02.216554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.216569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.227450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e23b8 00:31:57.346 [2024-04-26 13:15:02.228759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.228774] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.239595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190e0ea0 00:31:57.346 [2024-04-26 13:15:02.240920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.240936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.250977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190eb328 00:31:57.346 [2024-04-26 13:15:02.252292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.252307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.263813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190fbcf0 00:31:57.346 [2024-04-26 13:15:02.265086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.265101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.346 [2024-04-26 13:15:02.275973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e41df0) with pdu=0x2000190ee190 00:31:57.346 [2024-04-26 13:15:02.277249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.346 [2024-04-26 13:15:02.277265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:57.346 00:31:57.346 Latency(us) 00:31:57.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.346 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.346 nvme0n1 : 2.01 20967.43 81.90 0.00 0.00 6096.13 2225.49 14417.92 00:31:57.346 =================================================================================================================== 00:31:57.346 Total : 20967.43 81.90 0.00 0.00 6096.13 2225.49 14417.92 00:31:57.346 0 00:31:57.346 13:15:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:57.346 13:15:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:57.346 13:15:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:57.346 | .driver_specific 00:31:57.346 | .nvme_error 00:31:57.346 | .status_code 00:31:57.346 | .command_transient_transport_error' 00:31:57.346 13:15:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:57.606 13:15:02 -- host/digest.sh@71 -- # (( 164 > 0 )) 00:31:57.606 13:15:02 -- host/digest.sh@73 -- # killprocess 5006 00:31:57.606 13:15:02 -- common/autotest_common.sh@936 -- # '[' -z 5006 ']' 00:31:57.606 13:15:02 -- common/autotest_common.sh@940 -- # kill -0 5006 00:31:57.606 13:15:02 -- common/autotest_common.sh@941 -- # uname 00:31:57.606 13:15:02 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:57.606 13:15:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 5006 00:31:57.606 13:15:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:57.606 13:15:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:57.606 13:15:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 5006' 00:31:57.606 killing process with pid 5006 00:31:57.606 13:15:02 -- common/autotest_common.sh@955 -- # kill 5006 00:31:57.606 Received shutdown signal, test time was about 2.000000 seconds 00:31:57.606 00:31:57.606 Latency(us) 00:31:57.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.606 =================================================================================================================== 00:31:57.606 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:57.606 13:15:02 -- common/autotest_common.sh@960 -- # wait 5006 00:31:57.606 13:15:02 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:57.606 13:15:02 -- host/digest.sh@54 -- # local rw bs qd 00:31:57.606 13:15:02 -- host/digest.sh@56 -- # rw=randwrite 00:31:57.606 13:15:02 -- host/digest.sh@56 -- # bs=131072 00:31:57.606 13:15:02 -- host/digest.sh@56 -- # qd=16 00:31:57.606 13:15:02 -- host/digest.sh@58 -- # bperfpid=5797 00:31:57.606 13:15:02 -- host/digest.sh@60 -- # waitforlisten 5797 /var/tmp/bperf.sock 00:31:57.606 13:15:02 -- common/autotest_common.sh@817 -- # '[' -z 5797 ']' 00:31:57.606 13:15:02 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:57.606 13:15:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:57.606 13:15:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:57.607 13:15:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:57.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:57.607 13:15:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:57.607 13:15:02 -- common/autotest_common.sh@10 -- # set +x 00:31:57.868 [2024-04-26 13:15:02.688502] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:31:57.868 [2024-04-26 13:15:02.688557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5797 ] 00:31:57.868 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:57.868 Zero copy mechanism will not be used. 
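(Annotation: the relaunch above follows the harness's usual pattern — kill the previous bperf instance, start a fresh bdevperf in "wait for RPC" mode, and block until its UNIX-domain socket answers. A minimal shell sketch of that pattern, assuming the checkout path and socket path shown in the trace; SPDK_DIR and BPERF_SOCK are illustrative names, and the polling loop merely stands in for the harness's waitforlisten helper, using rpc_get_methods as a cheap probe of the socket.)

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Launch bdevperf on core 1 (-m 2): 128 KiB random writes, queue depth 16,
  # 2-second runs; -z = start idle and wait to be configured over RPC.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Poll until the RPC socket is up (the harness does this via waitforlisten).
  until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

Because -z keeps bdevperf idle until it is configured, the controller attach and error-statistics options are pushed over this socket next rather than coming from a config file.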
00:31:57.868 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.868 [2024-04-26 13:15:02.765222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.868 [2024-04-26 13:15:02.817188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.441 13:15:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:58.441 13:15:03 -- common/autotest_common.sh@850 -- # return 0 00:31:58.441 13:15:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:58.441 13:15:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:58.702 13:15:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:58.702 13:15:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.702 13:15:03 -- common/autotest_common.sh@10 -- # set +x 00:31:58.702 13:15:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.702 13:15:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:58.702 13:15:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:58.963 nvme0n1 00:31:58.963 13:15:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:58.963 13:15:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.963 13:15:03 -- common/autotest_common.sh@10 -- # set +x 00:31:58.963 13:15:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.963 13:15:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:58.963 13:15:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:59.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:59.225 Zero copy mechanism will not be used. 00:31:59.225 Running I/O for 2 seconds... 
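(Annotation: everything that produces the digest-error flood below is set up in the trace above. A hedged sketch of that sequence, using only the RPC calls visible in the log; SPDK_DIR, BPERF_RPC and TGT_RPC are illustrative variable names, and the nvmf target is assumed to answer on rpc.py's default socket, which is where the harness's rpc_cmd points here.)

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf instance
  TGT_RPC="$SPDK_DIR/scripts/rpc.py"                            # nvmf target, default socket

  # bdevperf side: enable per-bdev NVMe error counters, retry failed I/O indefinitely,
  # then attach the target subsystem with TCP data digest (--ddgst) enabled.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TGT_RPC accel_error_inject_error -o crc32c -t disable        # clear any earlier injection
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: inject crc32c corruption (-t corrupt -i 32, as in the trace), so the
  # data digest check fails and writes complete with TRANSIENT TRANSPORT ERROR (00/22).
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the workload, then read back how many commands ended with that status;
  # the jq path is the same filter the harness pipes bdev_get_iostat through.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  $BPERF_RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

For the 4 KiB pass that finished above, this count came back as 164, which is what the (( 164 > 0 )) check in the trace asserts on before the old bperf process is killed.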
00:31:59.225 [2024-04-26 13:15:04.081566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.081939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.081965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.089139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.089481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.089501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.097112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.097488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.097506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.105187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.105544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.105566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.113301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.113525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.113542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.119789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.120102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.120120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.124977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.125191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.125207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.130191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.130410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.130425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.139822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.140160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.140177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.145229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.145579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.145595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.150683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.150898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.150915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.156617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.156987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.157004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.165639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.165994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.166011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.171874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.172212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.172229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.178875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.179204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.179221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.185010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.185343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.185360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.193944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.194317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.225 [2024-04-26 13:15:04.194333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.225 [2024-04-26 13:15:04.201896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.225 [2024-04-26 13:15:04.202303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.202320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.210452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.210798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.210815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.218088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.218401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.218418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.226693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.227044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.227061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.232251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.232600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.232617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.240070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.240379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.240396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.246010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.246224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.246241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.250990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.251342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.251359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.257934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.258277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.258294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.263682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.264033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.264050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.268405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.268477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 
[2024-04-26 13:15:04.268492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.226 [2024-04-26 13:15:04.275311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.226 [2024-04-26 13:15:04.275387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.226 [2024-04-26 13:15:04.275402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.489 [2024-04-26 13:15:04.285583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.489 [2024-04-26 13:15:04.285938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.489 [2024-04-26 13:15:04.285958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.489 [2024-04-26 13:15:04.293469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.489 [2024-04-26 13:15:04.293552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.489 [2024-04-26 13:15:04.293567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.489 [2024-04-26 13:15:04.300794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.489 [2024-04-26 13:15:04.301160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.489 [2024-04-26 13:15:04.301177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.489 [2024-04-26 13:15:04.308439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.308753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.308770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.317481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.317555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.317569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.325038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.325405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.325422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.330758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.330973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.337461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.337782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.337799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.342657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.343008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.343024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.349978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.350314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.350331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.356054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.356385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.356402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.363144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.363491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.363508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.369919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.370163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.370179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.377013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.377344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.377361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.384680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.384987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.385004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.392541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.392896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.392913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.402887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.403226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.403243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.413766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.413849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.413864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.424136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.424464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.424480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.434901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.435360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.435377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.446243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.446658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.446675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.453298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.453516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.453532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.458185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.458270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.458285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.465895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.466239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.466255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.472789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.473124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.473141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.480300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.480719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.480736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.489171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 
[2024-04-26 13:15:04.489516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.489536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.499858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.500172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.500189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.490 [2024-04-26 13:15:04.510728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.490 [2024-04-26 13:15:04.511151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.490 [2024-04-26 13:15:04.511169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.491 [2024-04-26 13:15:04.521023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.491 [2024-04-26 13:15:04.521355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.491 [2024-04-26 13:15:04.521371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.491 [2024-04-26 13:15:04.532654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.491 [2024-04-26 13:15:04.533062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.491 [2024-04-26 13:15:04.533079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.491 [2024-04-26 13:15:04.541089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.491 [2024-04-26 13:15:04.541399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.491 [2024-04-26 13:15:04.541416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.491 [2024-04-26 13:15:04.546039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.491 [2024-04-26 13:15:04.546491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.491 [2024-04-26 13:15:04.546508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.555816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.556252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.556269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.562489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.562812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.562828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.569663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.569749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.569764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.575542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.575902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.575919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.580461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.580869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.580887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.589565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.589874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.589891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.597020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.597354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.597371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.604716] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.605059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.605076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.611217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.611526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.611542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.616888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.617263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.617280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.623803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.624246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.624266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.630031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.630437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.630454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.636885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.637324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.637341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.643890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.644243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.644259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:59.754 [2024-04-26 13:15:04.653895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.654207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.654224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.660506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.660727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.660743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.666258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.666478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.666495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.675879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.676309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.676325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.687805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.687927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.687942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.698695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.699058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.699076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.707710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.708037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.708053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.716880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.717277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.717293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.754 [2024-04-26 13:15:04.727808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.754 [2024-04-26 13:15:04.728155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.754 [2024-04-26 13:15:04.728172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.735456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.735833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.735854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.744368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.744692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.744709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.752622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.753069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.753086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.759519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.759862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.759878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.768095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.768451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.768467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.777103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.777454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.777471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.784217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.784428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.784444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.790799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.791130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.791147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.797522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.797861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.797878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.804450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.804770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.804787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.755 [2024-04-26 13:15:04.809954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:31:59.755 [2024-04-26 13:15:04.810391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.755 [2024-04-26 13:15:04.810408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.817167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.817504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.817522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.823381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.823691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.823707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.832120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.832426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.832446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.838292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.838503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.838518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.844182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.844534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.844550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.849303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.849642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.849660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.856130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.856481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.856498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.864978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.865319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 
[2024-04-26 13:15:04.865336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.874601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.874811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.874826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.881293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.881712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.881729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.889229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.018 [2024-04-26 13:15:04.889460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.018 [2024-04-26 13:15:04.889476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.018 [2024-04-26 13:15:04.897148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.897584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.897601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.904122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.904449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.904466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.909948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.910161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.910176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.916937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.917265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.917282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.923510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.923957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.923974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.932669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.933017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.933034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.940160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.940506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.940522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.949527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.949949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.949966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.959084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.959298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.959314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.965891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.966281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.966298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.972132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.972436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.972453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.978478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.978569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.978584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.985194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.985612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.985629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.992745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.993094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.993111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:04.997375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:04.997718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:04.997735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.003537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.003887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.003904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.010317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.010632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.010649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.017680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.018038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.018061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.023897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.024257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.024274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.033371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.033680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.033697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.040728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.041036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.041053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.047287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.047498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.047514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.053104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.053434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.053450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.058750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.058962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.058978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.066503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 
[2024-04-26 13:15:05.066952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.066969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.019 [2024-04-26 13:15:05.073919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.019 [2024-04-26 13:15:05.074284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.019 [2024-04-26 13:15:05.074301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.281 [2024-04-26 13:15:05.081778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.082106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.086985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.087426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.087443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.094854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.095280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.095297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.102932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.103304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.103321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.109873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.110197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.110213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.116617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.117078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.117095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.122091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.122315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.122331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.127295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.127615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.127632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.134290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.134697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.134713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.139237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.139574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.139591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.147181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.147631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.147649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.153611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.154153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.154170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.164344] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.164747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.164764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.172760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.173123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.173140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.183467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.183889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.183906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.191151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.191498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.191515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.200859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.201229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.201246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.209022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.209103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.209120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.219360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.219779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.219796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:00.282 [2024-04-26 13:15:05.226973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.227317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.227334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.234326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.234675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.234692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.282 [2024-04-26 13:15:05.246599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.282 [2024-04-26 13:15:05.246926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.282 [2024-04-26 13:15:05.246943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.256651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.256997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.257014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.266330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.266395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.266409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.277477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.277789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.277806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.286879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.287193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.287209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.296477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.296825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.296847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.306906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.307288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.307307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.317402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.317728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.317745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.328501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.328807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.328822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.283 [2024-04-26 13:15:05.338560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.283 [2024-04-26 13:15:05.338878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.283 [2024-04-26 13:15:05.338895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.346472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.346694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.346711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.351864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.352084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.352100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.360126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.360541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.360558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.369566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.369787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.369807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.380915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.381000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.381015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.391377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.391806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.391822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.398547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.398760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.398775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.404547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.404892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.404910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.411543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.411846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.411862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.417903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.418262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.418279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.426306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.426629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.426645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.432889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.433251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.433268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.439086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.439302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.439318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.445584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.445934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.445951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.452197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.452538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.452555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.458807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.459271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 
[2024-04-26 13:15:05.459289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.467469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.467800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.467817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.474074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.474405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.474422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.480598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.480938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.480955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.487761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.487974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.487990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.493015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.493411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.493429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.498057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.498264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.498281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.505235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.505567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.505584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.512945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.513259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.513275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.520924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.521231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.545 [2024-04-26 13:15:05.521247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.545 [2024-04-26 13:15:05.528738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.545 [2024-04-26 13:15:05.529194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.529211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.535524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.535856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.535872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.542444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.542657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.542673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.547595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.547805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.547821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.555110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.555414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.555434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.565559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.565963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.565981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.577030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.577382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.577398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.588887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.589322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.589338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.546 [2024-04-26 13:15:05.600919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.546 [2024-04-26 13:15:05.601008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-04-26 13:15:05.601023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.808 [2024-04-26 13:15:05.612016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.808 [2024-04-26 13:15:05.612325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-04-26 13:15:05.612342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-04-26 13:15:05.623872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.808 [2024-04-26 13:15:05.624282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-04-26 13:15:05.624298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.808 [2024-04-26 13:15:05.632304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.808 [2024-04-26 13:15:05.632643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-04-26 13:15:05.632660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.808 [2024-04-26 13:15:05.640340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.808 [2024-04-26 13:15:05.640689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-04-26 13:15:05.640705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.808 [2024-04-26 13:15:05.651170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.808 [2024-04-26 13:15:05.651492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-04-26 13:15:05.651510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-04-26 13:15:05.660174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.808 [2024-04-26 13:15:05.660524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-04-26 13:15:05.660542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.808 [2024-04-26 13:15:05.670259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.808 [2024-04-26 13:15:05.670562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-04-26 13:15:05.670578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.679991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.680090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.680106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.688027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.688371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.688387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.694587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 
[2024-04-26 13:15:05.694901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.694918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.703477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.703783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.703800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.710880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.711198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.711215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.717431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.717772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.717788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.723175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.723263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.723277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.730116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.730466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.730482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.737558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.737900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.737917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.742863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.743221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.743237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.751849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.752191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.752208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.760166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.760492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.760509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.769752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.770088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.770104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.779561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.779874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.779890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.788473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.788827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.788851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.793975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.794287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.794303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.800821] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.801171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.801188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.808621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.808885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.808901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.816936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.817025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.817039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.824570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.824784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.824800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.830652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.830960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.830977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.838412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.838758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.838775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.847801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.848150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.848167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:00.809 [2024-04-26 13:15:05.856120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.856427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.856444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-04-26 13:15:05.865231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:00.809 [2024-04-26 13:15:05.865570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-04-26 13:15:05.865587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.084 [2024-04-26 13:15:05.872184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.872395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.872410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.880193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.880627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.880643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.890040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.890409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.890425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.900078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.900156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.900171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.910718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.910805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.910820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.921423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.921645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.921661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.930519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.930857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.930877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.939248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.939594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.939611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.945518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.945885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.945902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.953185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.953513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.953530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.958112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.958426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.958443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.966229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.966619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.966635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.973894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.974257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.974273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.980295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.980601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.980618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.985484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.985705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.985721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:05.994193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:05.994518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:05.994535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:06.004131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:06.004497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:06.004513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:06.016633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:06.016868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:06.016885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:06.028749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:06.029021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:06.029038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:06.039272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:06.039663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:06.039680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.085 [2024-04-26 13:15:06.050769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.085 [2024-04-26 13:15:06.051187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.085 [2024-04-26 13:15:06.051204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.086 [2024-04-26 13:15:06.061562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.086 [2024-04-26 13:15:06.061831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.086 [2024-04-26 13:15:06.061853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.086 [2024-04-26 13:15:06.071405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e422d0) with pdu=0x2000190fef90 00:32:01.086 [2024-04-26 13:15:06.071855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.086 [2024-04-26 13:15:06.071873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.086 00:32:01.086 Latency(us) 00:32:01.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.086 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:01.086 nvme0n1 : 2.01 3922.51 490.31 0.00 0.00 4070.78 2116.27 13052.59 00:32:01.086 =================================================================================================================== 00:32:01.086 Total : 3922.51 490.31 0.00 0.00 4070.78 2116.27 13052.59 00:32:01.086 0 00:32:01.086 13:15:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:01.086 13:15:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:01.086 13:15:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:01.086 | .driver_specific 00:32:01.086 | .nvme_error 00:32:01.086 | .status_code 00:32:01.086 | .command_transient_transport_error' 00:32:01.086 13:15:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:01.353 13:15:06 -- host/digest.sh@71 -- # (( 253 > 0 )) 00:32:01.353 13:15:06 -- host/digest.sh@73 -- # killprocess 5797 00:32:01.353 13:15:06 -- common/autotest_common.sh@936 -- # '[' -z 5797 ']' 00:32:01.353 13:15:06 -- common/autotest_common.sh@940 -- # kill -0 5797 00:32:01.353 13:15:06 -- common/autotest_common.sh@941 -- # 
uname 00:32:01.353 13:15:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:01.353 13:15:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 5797 00:32:01.353 13:15:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:01.353 13:15:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:01.353 13:15:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 5797' 00:32:01.353 killing process with pid 5797 00:32:01.353 13:15:06 -- common/autotest_common.sh@955 -- # kill 5797 00:32:01.353 Received shutdown signal, test time was about 2.000000 seconds 00:32:01.353 00:32:01.353 Latency(us) 00:32:01.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.353 =================================================================================================================== 00:32:01.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:01.353 13:15:06 -- common/autotest_common.sh@960 -- # wait 5797 00:32:01.614 13:15:06 -- host/digest.sh@116 -- # killprocess 3218 00:32:01.614 13:15:06 -- common/autotest_common.sh@936 -- # '[' -z 3218 ']' 00:32:01.614 13:15:06 -- common/autotest_common.sh@940 -- # kill -0 3218 00:32:01.614 13:15:06 -- common/autotest_common.sh@941 -- # uname 00:32:01.614 13:15:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:01.614 13:15:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3218 00:32:01.614 13:15:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:01.614 13:15:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:01.614 13:15:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3218' 00:32:01.614 killing process with pid 3218 00:32:01.614 13:15:06 -- common/autotest_common.sh@955 -- # kill 3218 00:32:01.614 13:15:06 -- common/autotest_common.sh@960 -- # wait 3218 00:32:01.614 00:32:01.614 real 0m15.937s 00:32:01.614 user 0m31.398s 00:32:01.614 sys 0m3.229s 00:32:01.614 13:15:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:01.614 13:15:06 -- common/autotest_common.sh@10 -- # set +x 00:32:01.614 ************************************ 00:32:01.614 END TEST nvmf_digest_error 00:32:01.614 ************************************ 00:32:01.614 13:15:06 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:01.614 13:15:06 -- host/digest.sh@150 -- # nvmftestfini 00:32:01.614 13:15:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:01.614 13:15:06 -- nvmf/common.sh@117 -- # sync 00:32:01.614 13:15:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:01.614 13:15:06 -- nvmf/common.sh@120 -- # set +e 00:32:01.614 13:15:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:01.614 13:15:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:01.614 rmmod nvme_tcp 00:32:01.877 rmmod nvme_fabrics 00:32:01.877 rmmod nvme_keyring 00:32:01.877 13:15:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:01.877 13:15:06 -- nvmf/common.sh@124 -- # set -e 00:32:01.877 13:15:06 -- nvmf/common.sh@125 -- # return 0 00:32:01.877 13:15:06 -- nvmf/common.sh@478 -- # '[' -n 3218 ']' 00:32:01.877 13:15:06 -- nvmf/common.sh@479 -- # killprocess 3218 00:32:01.877 13:15:06 -- common/autotest_common.sh@936 -- # '[' -z 3218 ']' 00:32:01.877 13:15:06 -- common/autotest_common.sh@940 -- # kill -0 3218 00:32:01.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3218) - No such process 00:32:01.877 13:15:06 -- 
common/autotest_common.sh@963 -- # echo 'Process with pid 3218 is not found' 00:32:01.877 Process with pid 3218 is not found 00:32:01.877 13:15:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:01.877 13:15:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:01.877 13:15:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:01.877 13:15:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:01.877 13:15:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:01.877 13:15:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.877 13:15:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.877 13:15:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.792 13:15:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:03.792 00:32:03.792 real 0m42.232s 00:32:03.792 user 1m5.624s 00:32:03.792 sys 0m12.203s 00:32:03.792 13:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:03.792 13:15:08 -- common/autotest_common.sh@10 -- # set +x 00:32:03.792 ************************************ 00:32:03.792 END TEST nvmf_digest 00:32:03.792 ************************************ 00:32:03.792 13:15:08 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:32:03.792 13:15:08 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:32:03.792 13:15:08 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:32:03.792 13:15:08 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:03.792 13:15:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:03.792 13:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:03.792 13:15:08 -- common/autotest_common.sh@10 -- # set +x 00:32:04.054 ************************************ 00:32:04.054 START TEST nvmf_bdevperf 00:32:04.054 ************************************ 00:32:04.054 13:15:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:04.054 * Looking for test storage... 
00:32:04.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:04.054 13:15:09 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.054 13:15:09 -- nvmf/common.sh@7 -- # uname -s 00:32:04.054 13:15:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.054 13:15:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.054 13:15:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.054 13:15:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.316 13:15:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.316 13:15:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.316 13:15:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.316 13:15:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.316 13:15:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.316 13:15:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.316 13:15:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:04.316 13:15:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:04.316 13:15:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.316 13:15:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.316 13:15:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.316 13:15:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.316 13:15:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.316 13:15:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.316 13:15:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.316 13:15:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.316 13:15:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.316 13:15:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.317 13:15:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.317 13:15:09 -- paths/export.sh@5 -- # export PATH 00:32:04.317 13:15:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.317 13:15:09 -- nvmf/common.sh@47 -- # : 0 00:32:04.317 13:15:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:04.317 13:15:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:04.317 13:15:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.317 13:15:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.317 13:15:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.317 13:15:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:04.317 13:15:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:04.317 13:15:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:04.317 13:15:09 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:04.317 13:15:09 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:04.317 13:15:09 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:04.317 13:15:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:04.317 13:15:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.317 13:15:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:04.317 13:15:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:04.317 13:15:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:04.317 13:15:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.317 13:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.317 13:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.317 13:15:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:04.317 13:15:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:04.317 13:15:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:04.317 13:15:09 -- common/autotest_common.sh@10 -- # set +x 00:32:12.463 13:15:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:12.463 13:15:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:12.463 13:15:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:12.463 13:15:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:12.463 13:15:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:12.463 13:15:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:12.463 13:15:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:12.463 13:15:16 -- nvmf/common.sh@295 -- # net_devs=() 00:32:12.463 13:15:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:12.463 13:15:16 -- nvmf/common.sh@296 
-- # e810=() 00:32:12.463 13:15:16 -- nvmf/common.sh@296 -- # local -ga e810 00:32:12.463 13:15:16 -- nvmf/common.sh@297 -- # x722=() 00:32:12.463 13:15:16 -- nvmf/common.sh@297 -- # local -ga x722 00:32:12.463 13:15:16 -- nvmf/common.sh@298 -- # mlx=() 00:32:12.463 13:15:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:12.463 13:15:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.463 13:15:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:12.463 13:15:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:12.463 13:15:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:12.463 13:15:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.463 13:15:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:12.463 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:12.463 13:15:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.463 13:15:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:12.463 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:12.463 13:15:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:12.463 13:15:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.463 13:15:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.463 13:15:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:12.463 13:15:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.463 13:15:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:12.463 Found 
net devices under 0000:31:00.0: cvl_0_0 00:32:12.463 13:15:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.463 13:15:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.463 13:15:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.463 13:15:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:12.463 13:15:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.463 13:15:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:12.463 Found net devices under 0000:31:00.1: cvl_0_1 00:32:12.463 13:15:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.463 13:15:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:12.463 13:15:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:12.463 13:15:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:12.463 13:15:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:12.463 13:15:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.463 13:15:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.463 13:15:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.463 13:15:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:12.463 13:15:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.463 13:15:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.463 13:15:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:12.463 13:15:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.463 13:15:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.463 13:15:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:12.463 13:15:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:12.463 13:15:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.463 13:15:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.463 13:15:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.463 13:15:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.463 13:15:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:12.463 13:15:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.464 13:15:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.464 13:15:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.464 13:15:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:12.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:32:12.464 00:32:12.464 --- 10.0.0.2 ping statistics --- 00:32:12.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.464 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:32:12.464 13:15:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:12.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:32:12.464 00:32:12.464 --- 10.0.0.1 ping statistics --- 00:32:12.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.464 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:12.464 13:15:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.464 13:15:16 -- nvmf/common.sh@411 -- # return 0 00:32:12.464 13:15:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:32:12.464 13:15:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.464 13:15:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:12.464 13:15:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:12.464 13:15:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.464 13:15:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:12.464 13:15:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:12.464 13:15:16 -- host/bdevperf.sh@25 -- # tgt_init 00:32:12.464 13:15:16 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:12.464 13:15:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:12.464 13:15:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:12.464 13:15:16 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 13:15:16 -- nvmf/common.sh@470 -- # nvmfpid=11333 00:32:12.464 13:15:16 -- nvmf/common.sh@471 -- # waitforlisten 11333 00:32:12.464 13:15:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:12.464 13:15:16 -- common/autotest_common.sh@817 -- # '[' -z 11333 ']' 00:32:12.464 13:15:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.464 13:15:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:12.464 13:15:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.464 13:15:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:12.464 13:15:16 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 [2024-04-26 13:15:16.534064] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:32:12.464 [2024-04-26 13:15:16.534150] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.464 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.464 [2024-04-26 13:15:16.627696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:12.464 [2024-04-26 13:15:16.719430] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.464 [2024-04-26 13:15:16.719496] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.464 [2024-04-26 13:15:16.719505] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.464 [2024-04-26 13:15:16.719512] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.464 [2024-04-26 13:15:16.719518] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
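(For readers following the nvmf_tcp_init xtrace above: it reduces to the namespace wiring below. This is a condensed restatement of the commands already logged in this run, not additional steps; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this host and test harness.)
# Sketch of the traced setup: the target-side E810 port is moved into its own
# network namespace, the initiator port stays in the root namespace, and
# TCP port 4420 is opened for NVMe/TCP before reachability is verified.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side interface
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP listener traffic
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check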
00:32:12.464 [2024-04-26 13:15:16.719652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.464 [2024-04-26 13:15:16.719818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.464 [2024-04-26 13:15:16.719818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.464 13:15:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:12.464 13:15:17 -- common/autotest_common.sh@850 -- # return 0 00:32:12.464 13:15:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:12.464 13:15:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:12.464 13:15:17 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 13:15:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.464 13:15:17 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:12.464 13:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:12.464 13:15:17 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 [2024-04-26 13:15:17.365276] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.464 13:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:12.464 13:15:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:12.464 13:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:12.464 13:15:17 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 Malloc0 00:32:12.464 13:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:12.464 13:15:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:12.464 13:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:12.464 13:15:17 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 13:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:12.464 13:15:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:12.464 13:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:12.464 13:15:17 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 13:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:12.464 13:15:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.464 13:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:12.464 13:15:17 -- common/autotest_common.sh@10 -- # set +x 00:32:12.464 [2024-04-26 13:15:17.435280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.464 13:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:12.464 13:15:17 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:12.464 13:15:17 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:12.464 13:15:17 -- nvmf/common.sh@521 -- # config=() 00:32:12.464 13:15:17 -- nvmf/common.sh@521 -- # local subsystem config 00:32:12.464 13:15:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:12.464 13:15:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:12.464 { 00:32:12.464 "params": { 00:32:12.464 "name": "Nvme$subsystem", 00:32:12.464 "trtype": "$TEST_TRANSPORT", 00:32:12.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:12.464 "adrfam": "ipv4", 00:32:12.464 "trsvcid": "$NVMF_PORT", 00:32:12.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:12.464 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:12.464 "hdgst": ${hdgst:-false}, 00:32:12.464 "ddgst": ${ddgst:-false} 00:32:12.464 }, 00:32:12.464 "method": "bdev_nvme_attach_controller" 00:32:12.464 } 00:32:12.464 EOF 00:32:12.464 )") 00:32:12.464 13:15:17 -- nvmf/common.sh@543 -- # cat 00:32:12.464 13:15:17 -- nvmf/common.sh@545 -- # jq . 00:32:12.464 13:15:17 -- nvmf/common.sh@546 -- # IFS=, 00:32:12.464 13:15:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:12.464 "params": { 00:32:12.464 "name": "Nvme1", 00:32:12.464 "trtype": "tcp", 00:32:12.464 "traddr": "10.0.0.2", 00:32:12.464 "adrfam": "ipv4", 00:32:12.464 "trsvcid": "4420", 00:32:12.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:12.464 "hdgst": false, 00:32:12.464 "ddgst": false 00:32:12.464 }, 00:32:12.464 "method": "bdev_nvme_attach_controller" 00:32:12.464 }' 00:32:12.464 [2024-04-26 13:15:17.487455] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:32:12.464 [2024-04-26 13:15:17.487504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11368 ] 00:32:12.464 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.724 [2024-04-26 13:15:17.537625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.724 [2024-04-26 13:15:17.590367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.724 Running I/O for 1 seconds... 00:32:14.107 00:32:14.107 Latency(us) 00:32:14.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.107 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:14.107 Verification LBA range: start 0x0 length 0x4000 00:32:14.107 Nvme1n1 : 1.01 9047.35 35.34 0.00 0.00 14101.73 2484.91 16711.68 00:32:14.107 =================================================================================================================== 00:32:14.107 Total : 9047.35 35.34 0.00 0.00 14101.73 2484.91 16711.68 00:32:14.107 13:15:18 -- host/bdevperf.sh@30 -- # bdevperfpid=11701 00:32:14.107 13:15:18 -- host/bdevperf.sh@32 -- # sleep 3 00:32:14.107 13:15:18 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:14.107 13:15:18 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:14.107 13:15:18 -- nvmf/common.sh@521 -- # config=() 00:32:14.107 13:15:18 -- nvmf/common.sh@521 -- # local subsystem config 00:32:14.107 13:15:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:14.107 13:15:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:14.107 { 00:32:14.107 "params": { 00:32:14.107 "name": "Nvme$subsystem", 00:32:14.107 "trtype": "$TEST_TRANSPORT", 00:32:14.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.107 "adrfam": "ipv4", 00:32:14.107 "trsvcid": "$NVMF_PORT", 00:32:14.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.108 "hdgst": ${hdgst:-false}, 00:32:14.108 "ddgst": ${ddgst:-false} 00:32:14.108 }, 00:32:14.108 "method": "bdev_nvme_attach_controller" 00:32:14.108 } 00:32:14.108 EOF 00:32:14.108 )") 00:32:14.108 13:15:18 -- nvmf/common.sh@543 -- # cat 00:32:14.108 13:15:18 -- nvmf/common.sh@545 -- # jq . 
00:32:14.108 13:15:18 -- nvmf/common.sh@546 -- # IFS=, 00:32:14.108 13:15:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:14.108 "params": { 00:32:14.108 "name": "Nvme1", 00:32:14.108 "trtype": "tcp", 00:32:14.108 "traddr": "10.0.0.2", 00:32:14.108 "adrfam": "ipv4", 00:32:14.108 "trsvcid": "4420", 00:32:14.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:14.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:14.108 "hdgst": false, 00:32:14.108 "ddgst": false 00:32:14.108 }, 00:32:14.108 "method": "bdev_nvme_attach_controller" 00:32:14.108 }' 00:32:14.108 [2024-04-26 13:15:18.923123] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:32:14.108 [2024-04-26 13:15:18.923191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11701 ] 00:32:14.108 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.108 [2024-04-26 13:15:18.984979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.108 [2024-04-26 13:15:19.046337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.367 Running I/O for 15 seconds... 00:32:16.911 13:15:21 -- host/bdevperf.sh@33 -- # kill -9 11333 00:32:16.911 13:15:21 -- host/bdevperf.sh@35 -- # sleep 3 00:32:16.911 [2024-04-26 13:15:21.884159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90800 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.911 [2024-04-26 13:15:21.884947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.911 [2024-04-26 13:15:21.884956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.884963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.884973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 
13:15:21.884980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.884989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.884996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.912 [2024-04-26 13:15:21.885617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.912 [2024-04-26 13:15:21.885626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885659] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885826] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.885986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.885997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91368 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 
[2024-04-26 13:15:21.886175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.913 [2024-04-26 13:15:21.886290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.913 [2024-04-26 13:15:21.886299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.914 [2024-04-26 13:15:21.886306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.914 [2024-04-26 13:15:21.886323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.914 [2024-04-26 13:15:21.886340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.914 [2024-04-26 13:15:21.886356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.914 [2024-04-26 13:15:21.886373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.914 [2024-04-26 13:15:21.886390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.914 [2024-04-26 13:15:21.886406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.914 [2024-04-26 13:15:21.886423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.914 [2024-04-26 13:15:21.886439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.914 [2024-04-26 13:15:21.886455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.914 [2024-04-26 13:15:21.886472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.914 [2024-04-26 13:15:21.886489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfd890 is same with the state(5) to be set 00:32:16.914 [2024-04-26 13:15:21.886506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:32:16.914 [2024-04-26 13:15:21.886512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:16.914 [2024-04-26 13:15:21.886518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91560 len:8 PRP1 0x0 PRP2 0x0 00:32:16.914 [2024-04-26 13:15:21.886525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.914 [2024-04-26 13:15:21.886564] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbfd890 was disconnected and freed. reset controller. 00:32:16.914 [2024-04-26 13:15:21.890085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.914 [2024-04-26 13:15:21.890133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:16.914 [2024-04-26 13:15:21.890810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.891245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.891282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:16.914 [2024-04-26 13:15:21.891294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:16.914 [2024-04-26 13:15:21.891534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:16.914 [2024-04-26 13:15:21.891757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.914 [2024-04-26 13:15:21.891767] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.914 [2024-04-26 13:15:21.891775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.914 [2024-04-26 13:15:21.895320] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.914 [2024-04-26 13:15:21.904126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.914 [2024-04-26 13:15:21.904672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.905064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.905080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:16.914 [2024-04-26 13:15:21.905090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:16.914 [2024-04-26 13:15:21.905328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:16.914 [2024-04-26 13:15:21.905550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.914 [2024-04-26 13:15:21.905559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.914 [2024-04-26 13:15:21.905567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:16.914 [2024-04-26 13:15:21.909102] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.914 [2024-04-26 13:15:21.918053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.914 [2024-04-26 13:15:21.918702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.919064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.919080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:16.914 [2024-04-26 13:15:21.919090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:16.914 [2024-04-26 13:15:21.919328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:16.914 [2024-04-26 13:15:21.919550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.914 [2024-04-26 13:15:21.919559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.914 [2024-04-26 13:15:21.919567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.914 [2024-04-26 13:15:21.923104] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.914 [2024-04-26 13:15:21.931859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.914 [2024-04-26 13:15:21.932529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.932855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.932871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:16.914 [2024-04-26 13:15:21.932880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:16.914 [2024-04-26 13:15:21.933118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:16.914 [2024-04-26 13:15:21.933340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.914 [2024-04-26 13:15:21.933349] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.914 [2024-04-26 13:15:21.933357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.914 [2024-04-26 13:15:21.936894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.914 [2024-04-26 13:15:21.945631] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.914 [2024-04-26 13:15:21.946282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.946660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.914 [2024-04-26 13:15:21.946674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:16.914 [2024-04-26 13:15:21.946683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:16.914 [2024-04-26 13:15:21.946929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:16.914 [2024-04-26 13:15:21.947153] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.914 [2024-04-26 13:15:21.947162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.915 [2024-04-26 13:15:21.947170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.915 [2024-04-26 13:15:21.950700] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.915 [2024-04-26 13:15:21.959438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.915 [2024-04-26 13:15:21.960135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.915 [2024-04-26 13:15:21.960487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.915 [2024-04-26 13:15:21.960501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:16.915 [2024-04-26 13:15:21.960511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:16.915 [2024-04-26 13:15:21.960748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:16.915 [2024-04-26 13:15:21.960978] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.915 [2024-04-26 13:15:21.960989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.915 [2024-04-26 13:15:21.960996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.915 [2024-04-26 13:15:21.964528] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.175 [2024-04-26 13:15:21.973272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.175 [2024-04-26 13:15:21.973907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.175 [2024-04-26 13:15:21.974308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.175 [2024-04-26 13:15:21.974326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.175 [2024-04-26 13:15:21.974336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.175 [2024-04-26 13:15:21.974573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.175 [2024-04-26 13:15:21.974795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.175 [2024-04-26 13:15:21.974804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:21.974812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:21.978349] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:21.987096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:21.987622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:21.987985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:21.988001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:21.988011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:21.988248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:21.988471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:21.988481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:21.988488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:21.992022] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.000966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.001639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.002165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.002204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.002215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.002453] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.002674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.002684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.002692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.006228] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:22.014752] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.015325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.015704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.015718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.015733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.015980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.016204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.016212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.016220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.019750] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.028711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.029381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.029761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.029775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.029785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.030030] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.030252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.030262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.030269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.033805] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:22.042545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.043204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.043593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.043607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.043617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.043861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.044084] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.044093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.044101] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.047628] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.056367] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.056966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.057312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.057326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.057336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.057582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.057805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.057814] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.057821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.061358] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:22.070302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.070970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.071352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.071366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.071376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.071613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.071835] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.071854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.071861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.075391] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.084132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.084808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.085210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.085224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.085234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.085471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.085693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.085702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.085710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.089246] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:22.097984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.098610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.098998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.099014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.099024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.099261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.099488] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.099497] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.099505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.103041] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.111808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.112495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.112832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.112855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.112865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.113103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.113325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.113335] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.113342] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.116874] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:22.125611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.126256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.126599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.126613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.126623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.126868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.127091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.127100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.127107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.130650] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.139392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.139963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.140318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.140331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.140341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.140578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.140800] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.140814] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.140822] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.144360] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:22.153309] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.154021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.154406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.154420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.154430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.154667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.154896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.154906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.154914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.158445] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.167185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.167731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.168078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.168093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.168103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.168340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.168562] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.168571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.168579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.172112] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.176 [2024-04-26 13:15:22.181057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.181719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.182071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.182085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.182095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.182332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.182554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.176 [2024-04-26 13:15:22.182563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.176 [2024-04-26 13:15:22.182575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.176 [2024-04-26 13:15:22.186107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.176 [2024-04-26 13:15:22.194843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.176 [2024-04-26 13:15:22.195496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.195827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.176 [2024-04-26 13:15:22.195848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.176 [2024-04-26 13:15:22.195859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.176 [2024-04-26 13:15:22.196096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.176 [2024-04-26 13:15:22.196317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.177 [2024-04-26 13:15:22.196326] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.177 [2024-04-26 13:15:22.196333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.177 [2024-04-26 13:15:22.199866] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.177 [2024-04-26 13:15:22.208809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.177 [2024-04-26 13:15:22.209472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.177 [2024-04-26 13:15:22.209807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.177 [2024-04-26 13:15:22.209819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.177 [2024-04-26 13:15:22.209829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.177 [2024-04-26 13:15:22.210075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.177 [2024-04-26 13:15:22.210296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.177 [2024-04-26 13:15:22.210306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.177 [2024-04-26 13:15:22.210313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.177 [2024-04-26 13:15:22.213844] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.177 [2024-04-26 13:15:22.222583] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.177 [2024-04-26 13:15:22.223241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.177 [2024-04-26 13:15:22.223571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.177 [2024-04-26 13:15:22.223583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.177 [2024-04-26 13:15:22.223593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.177 [2024-04-26 13:15:22.223830] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.177 [2024-04-26 13:15:22.224060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.177 [2024-04-26 13:15:22.224069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.177 [2024-04-26 13:15:22.224077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.177 [2024-04-26 13:15:22.227614] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.436 [2024-04-26 13:15:22.236370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.436 [2024-04-26 13:15:22.236959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.237215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.237227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.436 [2024-04-26 13:15:22.237237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.436 [2024-04-26 13:15:22.237474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.436 [2024-04-26 13:15:22.237696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.436 [2024-04-26 13:15:22.237703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.436 [2024-04-26 13:15:22.237711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.436 [2024-04-26 13:15:22.241253] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.436 [2024-04-26 13:15:22.250194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.436 [2024-04-26 13:15:22.250755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.251148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.251158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.436 [2024-04-26 13:15:22.251166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.436 [2024-04-26 13:15:22.251385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.436 [2024-04-26 13:15:22.251603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.436 [2024-04-26 13:15:22.251610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.436 [2024-04-26 13:15:22.251617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.436 [2024-04-26 13:15:22.255142] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.436 [2024-04-26 13:15:22.264077] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.436 [2024-04-26 13:15:22.264715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.265069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.265083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.436 [2024-04-26 13:15:22.265092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.436 [2024-04-26 13:15:22.265330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.436 [2024-04-26 13:15:22.265551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.436 [2024-04-26 13:15:22.265560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.436 [2024-04-26 13:15:22.265567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.436 [2024-04-26 13:15:22.269104] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.436 [2024-04-26 13:15:22.277847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.436 [2024-04-26 13:15:22.278523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.278862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.436 [2024-04-26 13:15:22.278876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.436 [2024-04-26 13:15:22.278885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.436 [2024-04-26 13:15:22.279123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.436 [2024-04-26 13:15:22.279344] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.436 [2024-04-26 13:15:22.279353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.436 [2024-04-26 13:15:22.279360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.436 [2024-04-26 13:15:22.282896] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.291638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.292287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.292618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.292631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.292640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.292886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.293108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.293118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.293126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.296656] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.437 [2024-04-26 13:15:22.305607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.306259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.306592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.306605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.306615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.306859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.307081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.307091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.307098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.310630] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.319402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.319966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.320346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.320360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.320370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.320607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.320828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.320845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.320853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.324385] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.437 [2024-04-26 13:15:22.333352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.334022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.334357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.334369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.334379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.334616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.334845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.334855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.334863] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.338395] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.347142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.347809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.348156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.348170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.348180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.348417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.348639] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.348648] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.348656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.352193] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.437 [2024-04-26 13:15:22.360936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.361600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.361938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.361956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.361966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.362203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.362424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.362433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.362440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.365972] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.374708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.375371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.375700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.375712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.375722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.375966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.376188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.376197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.376204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.379732] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.437 [2024-04-26 13:15:22.388679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.389378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.389620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.389633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.389642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.389887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.390109] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.390118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.390125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.393651] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.402594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.403127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.403448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.403457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.403470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.403689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.403914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.403923] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.403930] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.407456] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.437 [2024-04-26 13:15:22.416397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.417068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.417399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.417412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.417421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.417659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.417887] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.417899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.417906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.421435] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.430176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.430855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.431212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.431225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.431235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.431472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.431693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.431702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.431709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.435245] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.437 [2024-04-26 13:15:22.443990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.444660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.444989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.445004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.445013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.445254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.445475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.445484] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.445492] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.449034] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.457767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.458442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.458800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.458813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.458823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.459068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.459290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.459299] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.459307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.462840] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.437 [2024-04-26 13:15:22.471571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.472223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.472553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.472566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.472575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.472812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.473043] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.473053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.473060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.476587] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.437 [2024-04-26 13:15:22.485533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.437 [2024-04-26 13:15:22.486206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.486540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.437 [2024-04-26 13:15:22.486552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.437 [2024-04-26 13:15:22.486562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.437 [2024-04-26 13:15:22.486799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.437 [2024-04-26 13:15:22.487031] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.437 [2024-04-26 13:15:22.487041] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.437 [2024-04-26 13:15:22.487048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.437 [2024-04-26 13:15:22.490582] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.499328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.499951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.500292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.500304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.500314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.500551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.500772] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.500782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.500789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.504327] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.513274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.513900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.514246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.514259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.514268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.514505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.514726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.514734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.514742] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.518280] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.527047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.527727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.528076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.528090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.528100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.528337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.528558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.528567] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.528579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.532126] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.540865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.541530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.541899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.541914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.541923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.542160] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.542383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.542391] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.542398] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.545935] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.554677] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.555384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.555714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.555728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.555737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.555983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.556205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.556213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.556220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.559747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.568489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.569158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.569490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.569503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.569513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.569750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.569980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.569990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.569997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.573532] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.582268] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.582940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.583337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.583350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.583359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.583597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.583818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.583826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.583834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.587373] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.596108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.596737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.597069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.597083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.597093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.597330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.597552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.597560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.597568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.601101] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.610048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.610715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.611102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.611116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.611126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.611363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.611585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.611593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.611600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.615134] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.623882] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.624546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.624882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.624896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.624906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.625143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.625364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.625373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.625380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.628916] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.637654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.638310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.638635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.638648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.638657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.638903] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.639125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.639135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.639142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.642671] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.651621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.652249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.652581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.652594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.652604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.652848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.653071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.653079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.653086] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.656621] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.665569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.666194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.666527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.666540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.666549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.666787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.667017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.667027] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.667034] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.670563] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.679507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.680135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.680462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.680475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.680485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.680722] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.680951] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.680967] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.680975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.684504] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.693448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.694132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.694463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.694476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.694485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.694722] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.694952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.694962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.694969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.698499] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.707238] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.707781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.708091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.708110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.708118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.708336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.708554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.708561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.708568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.712099] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.721039] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.721691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.722020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.722034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.722044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.722281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.722502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.722511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.722518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.726051] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.698 [2024-04-26 13:15:22.734824] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.698 [2024-04-26 13:15:22.735505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.735746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.698 [2024-04-26 13:15:22.735758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.698 [2024-04-26 13:15:22.735768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.698 [2024-04-26 13:15:22.736014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.698 [2024-04-26 13:15:22.736236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.698 [2024-04-26 13:15:22.736244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.698 [2024-04-26 13:15:22.736251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.698 [2024-04-26 13:15:22.739781] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.698 [2024-04-26 13:15:22.748730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.699 [2024-04-26 13:15:22.749388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.699 [2024-04-26 13:15:22.749717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.699 [2024-04-26 13:15:22.749730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.699 [2024-04-26 13:15:22.749743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.699 [2024-04-26 13:15:22.749989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.699 [2024-04-26 13:15:22.750211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.699 [2024-04-26 13:15:22.750220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.699 [2024-04-26 13:15:22.750227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.699 [2024-04-26 13:15:22.753755] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.961 [2024-04-26 13:15:22.762503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.961 [2024-04-26 13:15:22.763158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.763495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.763509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.961 [2024-04-26 13:15:22.763518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.961 [2024-04-26 13:15:22.763756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.961 [2024-04-26 13:15:22.763986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.961 [2024-04-26 13:15:22.763995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.961 [2024-04-26 13:15:22.764003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.961 [2024-04-26 13:15:22.767533] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.961 [2024-04-26 13:15:22.776475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.961 [2024-04-26 13:15:22.776994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.777323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.777336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.961 [2024-04-26 13:15:22.777346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.961 [2024-04-26 13:15:22.777583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.961 [2024-04-26 13:15:22.777804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.961 [2024-04-26 13:15:22.777813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.961 [2024-04-26 13:15:22.777821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.961 [2024-04-26 13:15:22.781357] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.961 [2024-04-26 13:15:22.790305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.961 [2024-04-26 13:15:22.791052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.791383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.791396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.961 [2024-04-26 13:15:22.791406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.961 [2024-04-26 13:15:22.791648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.961 [2024-04-26 13:15:22.791879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.961 [2024-04-26 13:15:22.791889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.961 [2024-04-26 13:15:22.791896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.961 [2024-04-26 13:15:22.795427] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.961 [2024-04-26 13:15:22.804166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.961 [2024-04-26 13:15:22.804863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.805261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.961 [2024-04-26 13:15:22.805274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.961 [2024-04-26 13:15:22.805283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.961 [2024-04-26 13:15:22.805520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.805741] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.805749] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.805757] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.809294] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.962 [2024-04-26 13:15:22.818029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.818695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.818938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.818952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.818961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.819199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.819420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.819428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.819435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.822969] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.962 [2024-04-26 13:15:22.831949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.832621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.832953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.832967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.832977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.833214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.833439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.833448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.833455] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.836988] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.962 [2024-04-26 13:15:22.845725] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.846406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.846739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.846752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.846762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.847008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.847230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.847239] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.847247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.850779] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.962 [2024-04-26 13:15:22.859516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.860180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.860515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.860528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.860538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.860775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.861004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.861014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.861022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.864554] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.962 [2024-04-26 13:15:22.873295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.874067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.874309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.874321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.874331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.874568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.874790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.874803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.874811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.878351] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.962 [2024-04-26 13:15:22.887098] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.887634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.887982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.887994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.888002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.888221] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.888440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.888448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.888456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.891983] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.962 [2024-04-26 13:15:22.901080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.901751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.902064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.902078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.902089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.902327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.902550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.902559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.902566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.906106] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.962 [2024-04-26 13:15:22.914855] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.915381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.915725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.915738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.915748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.915993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.916215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.916223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.916236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.962 [2024-04-26 13:15:22.919769] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.962 [2024-04-26 13:15:22.928726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.962 [2024-04-26 13:15:22.929420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.929654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.962 [2024-04-26 13:15:22.929667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.962 [2024-04-26 13:15:22.929677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.962 [2024-04-26 13:15:22.929929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.962 [2024-04-26 13:15:22.930151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.962 [2024-04-26 13:15:22.930159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.962 [2024-04-26 13:15:22.930167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.963 [2024-04-26 13:15:22.933698] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.963 [2024-04-26 13:15:22.942765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.963 [2024-04-26 13:15:22.943488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.943827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.943847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.963 [2024-04-26 13:15:22.943858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.963 [2024-04-26 13:15:22.944095] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.963 [2024-04-26 13:15:22.944317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.963 [2024-04-26 13:15:22.944325] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.963 [2024-04-26 13:15:22.944333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.963 [2024-04-26 13:15:22.947869] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.963 [2024-04-26 13:15:22.956620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.963 [2024-04-26 13:15:22.957176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.957540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.957553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.963 [2024-04-26 13:15:22.957562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.963 [2024-04-26 13:15:22.957800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.963 [2024-04-26 13:15:22.958027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.963 [2024-04-26 13:15:22.958036] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.963 [2024-04-26 13:15:22.958043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.963 [2024-04-26 13:15:22.961580] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.963 [2024-04-26 13:15:22.970552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.963 [2024-04-26 13:15:22.971107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.971348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.971358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.963 [2024-04-26 13:15:22.971366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.963 [2024-04-26 13:15:22.971584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.963 [2024-04-26 13:15:22.971802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.963 [2024-04-26 13:15:22.971810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.963 [2024-04-26 13:15:22.971816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.963 [2024-04-26 13:15:22.975344] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.963 [2024-04-26 13:15:22.984493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.963 [2024-04-26 13:15:22.984946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.985323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.985336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.963 [2024-04-26 13:15:22.985346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.963 [2024-04-26 13:15:22.985583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.963 [2024-04-26 13:15:22.985804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.963 [2024-04-26 13:15:22.985813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.963 [2024-04-26 13:15:22.985820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.963 [2024-04-26 13:15:22.989355] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.963 [2024-04-26 13:15:22.998308] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.963 [2024-04-26 13:15:22.998814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.999372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:22.999386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.963 [2024-04-26 13:15:22.999396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.963 [2024-04-26 13:15:22.999633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.963 [2024-04-26 13:15:22.999860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.963 [2024-04-26 13:15:22.999869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.963 [2024-04-26 13:15:22.999876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.963 [2024-04-26 13:15:23.003410] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.963 [2024-04-26 13:15:23.012153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.963 [2024-04-26 13:15:23.012828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:23.013091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.963 [2024-04-26 13:15:23.013104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:17.963 [2024-04-26 13:15:23.013114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:17.963 [2024-04-26 13:15:23.013351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:17.963 [2024-04-26 13:15:23.013572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.963 [2024-04-26 13:15:23.013580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.963 [2024-04-26 13:15:23.013587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.963 [2024-04-26 13:15:23.017128] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.224 [2024-04-26 13:15:23.026090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.224 [2024-04-26 13:15:23.026628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.224 [2024-04-26 13:15:23.026976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.224 [2024-04-26 13:15:23.026987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.224 [2024-04-26 13:15:23.026995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.224 [2024-04-26 13:15:23.027214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.027434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.027442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.027449] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.030996] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.225 [2024-04-26 13:15:23.039953] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.040531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.040861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.040872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.040880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.041099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.041317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.041325] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.041332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.044865] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.225 [2024-04-26 13:15:23.053817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.054376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.054604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.054614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.054622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.054845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.055064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.055072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.055078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.058602] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.225 [2024-04-26 13:15:23.067756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.068391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.068757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.068769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.068779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.069025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.069248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.069256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.069263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.072804] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.225 [2024-04-26 13:15:23.081562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.082244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.082596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.082609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.082619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.082864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.083087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.083095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.083103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.086638] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.225 [2024-04-26 13:15:23.095391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.095872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.096174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.096188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.096196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.096415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.096633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.096641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.096647] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.100186] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.225 [2024-04-26 13:15:23.109354] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.110054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.110411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.110424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.110434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.110671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.110901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.110909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.110917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.114448] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.225 [2024-04-26 13:15:23.123205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.123740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.124087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.124098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.124106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.124324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.124542] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.124549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.124556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.128091] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.225 [2024-04-26 13:15:23.137059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.137618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.137959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.137970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.137982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.138200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.138418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.138425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.138432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.225 [2024-04-26 13:15:23.141966] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.225 [2024-04-26 13:15:23.150947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.225 [2024-04-26 13:15:23.151524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.151804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.225 [2024-04-26 13:15:23.151814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.225 [2024-04-26 13:15:23.151821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.225 [2024-04-26 13:15:23.152044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.225 [2024-04-26 13:15:23.152262] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.225 [2024-04-26 13:15:23.152270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.225 [2024-04-26 13:15:23.152276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.155808] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.226 [2024-04-26 13:15:23.164772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.165344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.165681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.165691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.165698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.165921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.166139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.166146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.166153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.169679] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.226 [2024-04-26 13:15:23.178634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.179251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.179607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.179620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.179630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.179879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.180101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.180109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.180116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.183651] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.226 [2024-04-26 13:15:23.192612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.193159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.193495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.193505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.193512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.193731] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.193954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.193962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.193969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.197497] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.226 [2024-04-26 13:15:23.206451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.206984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.207290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.207300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.207307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.207526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.207743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.207751] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.207758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.211287] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.226 [2024-04-26 13:15:23.220236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.220769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.221092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.221102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.221110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.221327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.221549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.221557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.221564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.225095] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.226 [2024-04-26 13:15:23.234067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.234585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.234935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.234946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.234953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.235172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.235389] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.235397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.235404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.238933] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.226 [2024-04-26 13:15:23.247884] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.248404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.248756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.248765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.248772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.248997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.249215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.249222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.249229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.252755] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.226 [2024-04-26 13:15:23.261710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.262341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.262698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.262711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.262720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.262963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.263185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.263194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.263205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.266741] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.226 [2024-04-26 13:15:23.275511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.226 [2024-04-26 13:15:23.276163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.276522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.226 [2024-04-26 13:15:23.276534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.226 [2024-04-26 13:15:23.276544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.226 [2024-04-26 13:15:23.276781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.226 [2024-04-26 13:15:23.277009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.226 [2024-04-26 13:15:23.277018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.226 [2024-04-26 13:15:23.277025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.226 [2024-04-26 13:15:23.280557] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.488 [2024-04-26 13:15:23.289298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.488 [2024-04-26 13:15:23.289712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.290021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.290032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.488 [2024-04-26 13:15:23.290040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.488 [2024-04-26 13:15:23.290259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.488 [2024-04-26 13:15:23.290478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.488 [2024-04-26 13:15:23.290485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.488 [2024-04-26 13:15:23.290492] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.488 [2024-04-26 13:15:23.294018] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.488 [2024-04-26 13:15:23.303161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.488 [2024-04-26 13:15:23.303691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.304074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.304084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.488 [2024-04-26 13:15:23.304092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.488 [2024-04-26 13:15:23.304310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.488 [2024-04-26 13:15:23.304527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.488 [2024-04-26 13:15:23.304535] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.488 [2024-04-26 13:15:23.304541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.488 [2024-04-26 13:15:23.308073] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.488 [2024-04-26 13:15:23.317006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.488 [2024-04-26 13:15:23.317544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.317878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.317888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.488 [2024-04-26 13:15:23.317896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.488 [2024-04-26 13:15:23.318113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.488 [2024-04-26 13:15:23.318330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.488 [2024-04-26 13:15:23.318338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.488 [2024-04-26 13:15:23.318344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.488 [2024-04-26 13:15:23.321871] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.488 [2024-04-26 13:15:23.330803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.488 [2024-04-26 13:15:23.331453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.331810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.331823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.488 [2024-04-26 13:15:23.331832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.488 [2024-04-26 13:15:23.332076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.488 [2024-04-26 13:15:23.332298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.488 [2024-04-26 13:15:23.332306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.488 [2024-04-26 13:15:23.332313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.488 [2024-04-26 13:15:23.335850] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.488 [2024-04-26 13:15:23.344595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.488 [2024-04-26 13:15:23.345181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.345521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.345531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.488 [2024-04-26 13:15:23.345539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.488 [2024-04-26 13:15:23.345758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.488 [2024-04-26 13:15:23.345981] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.488 [2024-04-26 13:15:23.345989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.488 [2024-04-26 13:15:23.345996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.488 [2024-04-26 13:15:23.349519] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.488 [2024-04-26 13:15:23.358507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.488 [2024-04-26 13:15:23.359201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.359564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.359578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.488 [2024-04-26 13:15:23.359587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.488 [2024-04-26 13:15:23.359824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.488 [2024-04-26 13:15:23.360051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.488 [2024-04-26 13:15:23.360060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.488 [2024-04-26 13:15:23.360067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.488 [2024-04-26 13:15:23.363597] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.488 [2024-04-26 13:15:23.372339] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.488 [2024-04-26 13:15:23.372943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.373300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.488 [2024-04-26 13:15:23.373313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.488 [2024-04-26 13:15:23.373323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.488 [2024-04-26 13:15:23.373560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.373781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.373790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.373798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.377333] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.489 [2024-04-26 13:15:23.386281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.386854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.387191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.387202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.387210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.387428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.387646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.387654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.387661] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.391191] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.489 [2024-04-26 13:15:23.400134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.400802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.401161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.401175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.401185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.401422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.401643] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.401652] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.401659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.405199] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.489 [2024-04-26 13:15:23.413945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.414522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.414846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.414858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.414865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.415085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.415303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.415311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.415317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.418848] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.489 [2024-04-26 13:15:23.427791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.428366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.428707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.428716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.428724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.428946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.429165] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.429172] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.429179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.432712] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.489 [2024-04-26 13:15:23.441651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.442275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.442638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.442656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.442666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.442910] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.443133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.443141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.443148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.446678] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.489 [2024-04-26 13:15:23.455426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.455971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.456318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.456329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.456336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.456555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.456773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.456780] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.456787] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.460313] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.489 [2024-04-26 13:15:23.469259] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.469785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.470159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.470170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.470177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.470395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.470612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.470620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.470627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.474151] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.489 [2024-04-26 13:15:23.483095] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.483696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.484058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.484073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.484087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.484324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.484546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.484554] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.484561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.488101] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.489 [2024-04-26 13:15:23.497055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.497722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.498091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.498105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.498115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.498352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.498574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.498582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.498589] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.502125] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.489 [2024-04-26 13:15:23.510866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.511403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.511738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.511748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.511756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.511978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.489 [2024-04-26 13:15:23.512197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.489 [2024-04-26 13:15:23.512204] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.489 [2024-04-26 13:15:23.512211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.489 [2024-04-26 13:15:23.515734] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.489 [2024-04-26 13:15:23.524683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.489 [2024-04-26 13:15:23.525351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.525707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.489 [2024-04-26 13:15:23.525720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.489 [2024-04-26 13:15:23.525730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.489 [2024-04-26 13:15:23.525978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.490 [2024-04-26 13:15:23.526200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.490 [2024-04-26 13:15:23.526208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.490 [2024-04-26 13:15:23.526216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.490 [2024-04-26 13:15:23.529745] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.490 [2024-04-26 13:15:23.538498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.490 [2024-04-26 13:15:23.539053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.490 [2024-04-26 13:15:23.539281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.490 [2024-04-26 13:15:23.539291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.490 [2024-04-26 13:15:23.539299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.490 [2024-04-26 13:15:23.539518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.490 [2024-04-26 13:15:23.539735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.490 [2024-04-26 13:15:23.539743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.490 [2024-04-26 13:15:23.539749] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.490 [2024-04-26 13:15:23.543273] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.751 [2024-04-26 13:15:23.552419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.751 [2024-04-26 13:15:23.552941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.553148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.553160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.751 [2024-04-26 13:15:23.553167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.751 [2024-04-26 13:15:23.553386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.751 [2024-04-26 13:15:23.553604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.751 [2024-04-26 13:15:23.553612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.751 [2024-04-26 13:15:23.553618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.751 [2024-04-26 13:15:23.557150] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.751 [2024-04-26 13:15:23.566333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.751 [2024-04-26 13:15:23.566946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.567332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.567345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.751 [2024-04-26 13:15:23.567355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.751 [2024-04-26 13:15:23.567592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.751 [2024-04-26 13:15:23.567818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.751 [2024-04-26 13:15:23.567826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.751 [2024-04-26 13:15:23.567833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.751 [2024-04-26 13:15:23.571372] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.751 [2024-04-26 13:15:23.580108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.751 [2024-04-26 13:15:23.580680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.581005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.581015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.751 [2024-04-26 13:15:23.581023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.751 [2024-04-26 13:15:23.581241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.751 [2024-04-26 13:15:23.581459] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.751 [2024-04-26 13:15:23.581466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.751 [2024-04-26 13:15:23.581473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.751 [2024-04-26 13:15:23.585006] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.751 [2024-04-26 13:15:23.593954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.751 [2024-04-26 13:15:23.594616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.594918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.594932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.751 [2024-04-26 13:15:23.594942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.751 [2024-04-26 13:15:23.595179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.751 [2024-04-26 13:15:23.595400] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.751 [2024-04-26 13:15:23.595408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.751 [2024-04-26 13:15:23.595415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.751 [2024-04-26 13:15:23.598948] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.751 [2024-04-26 13:15:23.607903] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.751 [2024-04-26 13:15:23.608477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.608809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.751 [2024-04-26 13:15:23.608819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.751 [2024-04-26 13:15:23.608826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.609049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.609268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.609280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.609287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.612813] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.752 [2024-04-26 13:15:23.621753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.622419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.622785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.622798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.622808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.623052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.623274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.623282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.623289] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.626820] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.752 [2024-04-26 13:15:23.635567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.636274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.636616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.636629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.636639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.636884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.637106] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.637114] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.637122] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.640648] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.752 [2024-04-26 13:15:23.649385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.650055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.650413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.650426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.650435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.650673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.650901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.650910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.650922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.654449] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.752 [2024-04-26 13:15:23.663199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.663903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.664223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.664236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.664245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.664483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.664704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.664712] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.664719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.668253] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.752 [2024-04-26 13:15:23.676991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.677652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.678006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.678020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.678030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.678267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.678488] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.678496] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.678503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.682037] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.752 [2024-04-26 13:15:23.690777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.691428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.691786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.691799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.691808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.692055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.692277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.692285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.692292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.695834] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.752 [2024-04-26 13:15:23.704573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.705231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.705587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.705599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.705609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.705854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.706077] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.706085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.706092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.709621] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.752 [2024-04-26 13:15:23.718357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.719025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.719392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.719405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.719414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.719651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.719881] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.719889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.719897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.723433] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.752 [2024-04-26 13:15:23.732180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.752 [2024-04-26 13:15:23.732717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.733081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.752 [2024-04-26 13:15:23.733092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.752 [2024-04-26 13:15:23.733100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.752 [2024-04-26 13:15:23.733318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.752 [2024-04-26 13:15:23.733536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.752 [2024-04-26 13:15:23.733544] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.752 [2024-04-26 13:15:23.733551] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.752 [2024-04-26 13:15:23.737079] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.753 [2024-04-26 13:15:23.746016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.753 [2024-04-26 13:15:23.746585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.746882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.746892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.753 [2024-04-26 13:15:23.746900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.753 [2024-04-26 13:15:23.747118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.753 [2024-04-26 13:15:23.747335] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.753 [2024-04-26 13:15:23.747343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.753 [2024-04-26 13:15:23.747350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.753 [2024-04-26 13:15:23.750875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.753 [2024-04-26 13:15:23.759811] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.753 [2024-04-26 13:15:23.760478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.760830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.760852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.753 [2024-04-26 13:15:23.760862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.753 [2024-04-26 13:15:23.761099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.753 [2024-04-26 13:15:23.761321] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.753 [2024-04-26 13:15:23.761329] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.753 [2024-04-26 13:15:23.761336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.753 [2024-04-26 13:15:23.764867] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.753 [2024-04-26 13:15:23.773633] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.753 [2024-04-26 13:15:23.774287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.774640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.774652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.753 [2024-04-26 13:15:23.774662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.753 [2024-04-26 13:15:23.774908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.753 [2024-04-26 13:15:23.775137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.753 [2024-04-26 13:15:23.775145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.753 [2024-04-26 13:15:23.775152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.753 [2024-04-26 13:15:23.778685] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:18.753 [2024-04-26 13:15:23.787424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.753 [2024-04-26 13:15:23.788088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.788448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.788461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.753 [2024-04-26 13:15:23.788470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.753 [2024-04-26 13:15:23.788708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.753 [2024-04-26 13:15:23.788936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.753 [2024-04-26 13:15:23.788945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.753 [2024-04-26 13:15:23.788953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.753 [2024-04-26 13:15:23.792488] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.753 [2024-04-26 13:15:23.801232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.753 [2024-04-26 13:15:23.801895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.802255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.753 [2024-04-26 13:15:23.802267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:18.753 [2024-04-26 13:15:23.802277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:18.753 [2024-04-26 13:15:23.802514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:18.753 [2024-04-26 13:15:23.802735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.753 [2024-04-26 13:15:23.802743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.753 [2024-04-26 13:15:23.802751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.753 [2024-04-26 13:15:23.806291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.017 [2024-04-26 13:15:23.815041] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.017 [2024-04-26 13:15:23.815703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.017 [2024-04-26 13:15:23.816073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.816087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.816097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.816334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.816556] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.816564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.816571] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.820107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.018 [2024-04-26 13:15:23.828850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.829518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.829752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.829769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.829779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.830026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.830248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.830256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.830263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.833806] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.018 [2024-04-26 13:15:23.842756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.843424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.843784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.843796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.843806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.844052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.844274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.844282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.844290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.847912] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.018 [2024-04-26 13:15:23.856648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.857319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.857678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.857691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.857701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.857946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.858168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.858176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.858183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.861712] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.018 [2024-04-26 13:15:23.870447] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.871107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.871459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.871472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.871486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.871723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.871952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.871961] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.871969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.875499] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.018 [2024-04-26 13:15:23.884242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.884788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.885092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.885103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.885111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.885330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.885547] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.885555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.885561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.889088] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.018 [2024-04-26 13:15:23.898023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.898684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.899061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.899075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.899085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.899480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.899750] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.899760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.899767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.903308] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.018 [2024-04-26 13:15:23.911844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.912517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.912852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.912866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.912875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.913117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.913338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.913346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.913354] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.916892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.018 [2024-04-26 13:15:23.925643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.926235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.926590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.926602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.926612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.926857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.927079] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.018 [2024-04-26 13:15:23.927087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.018 [2024-04-26 13:15:23.927094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.018 [2024-04-26 13:15:23.930625] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.018 [2024-04-26 13:15:23.939582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.018 [2024-04-26 13:15:23.940241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.940602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.018 [2024-04-26 13:15:23.940615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.018 [2024-04-26 13:15:23.940624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.018 [2024-04-26 13:15:23.940870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.018 [2024-04-26 13:15:23.941092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:23.941100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:23.941108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:23.944639] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.019 [2024-04-26 13:15:23.953376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:23.954044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.954418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.954431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:23.954440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:23.954677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:23.954911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:23.954920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:23.954928] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:23.958459] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.019 [2024-04-26 13:15:23.967192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:23.967845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.968225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.968238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:23.968248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:23.968485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:23.968707] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:23.968715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:23.968722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:23.972257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.019 [2024-04-26 13:15:23.981026] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:23.981667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.982039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.982053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:23.982062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:23.982299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:23.982521] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:23.982529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:23.982536] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:23.986081] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.019 [2024-04-26 13:15:23.994903] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:23.995572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.995821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:23.995833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:23.995852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:23.996090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:23.996311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:23.996327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:23.996334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:23.999866] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.019 [2024-04-26 13:15:24.008808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:24.009431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.009793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.009806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:24.009815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:24.010061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:24.010284] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:24.010292] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:24.010299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:24.013827] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.019 [2024-04-26 13:15:24.022773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:24.023443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.023800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.023813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:24.023823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:24.024069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:24.024291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:24.024299] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:24.024307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:24.027844] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.019 [2024-04-26 13:15:24.036610] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:24.037261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.037625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.037637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:24.037647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:24.037892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:24.038114] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:24.038122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:24.038134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:24.041670] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.019 [2024-04-26 13:15:24.050422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:24.050970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.051320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.051333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:24.051343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:24.051579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:24.051800] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:24.051808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:24.051816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:24.055350] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.019 [2024-04-26 13:15:24.064299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.019 [2024-04-26 13:15:24.064841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.065258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.019 [2024-04-26 13:15:24.065295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.019 [2024-04-26 13:15:24.065306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.019 [2024-04-26 13:15:24.065543] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.019 [2024-04-26 13:15:24.065765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.019 [2024-04-26 13:15:24.065773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.019 [2024-04-26 13:15:24.065780] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.019 [2024-04-26 13:15:24.069318] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.325 [2024-04-26 13:15:24.078272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.325 [2024-04-26 13:15:24.078811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.325 [2024-04-26 13:15:24.079269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.325 [2024-04-26 13:15:24.079306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.325 [2024-04-26 13:15:24.079317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.325 [2024-04-26 13:15:24.079555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.325 [2024-04-26 13:15:24.079777] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.325 [2024-04-26 13:15:24.079785] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.325 [2024-04-26 13:15:24.079792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.325 [2024-04-26 13:15:24.083334] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.325 [2024-04-26 13:15:24.092080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.325 [2024-04-26 13:15:24.092623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.325 [2024-04-26 13:15:24.092911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.325 [2024-04-26 13:15:24.092923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.092931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.093150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.093368] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.093375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.093382] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.096905] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.326 [2024-04-26 13:15:24.106052] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.106711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.107089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.107104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.107113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.107351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.107573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.107580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.107588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.111123] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.326 [2024-04-26 13:15:24.119870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.120531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.120932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.120945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.120955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.121192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.121413] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.121421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.121428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.124961] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.326 [2024-04-26 13:15:24.133711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.134356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.134714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.134727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.134737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.134982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.135205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.135213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.135220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.138747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.326 [2024-04-26 13:15:24.147689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.148317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.148678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.148691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.148700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.148946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.149168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.149176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.149184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.152714] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.326 [2024-04-26 13:15:24.161656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.162316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.162689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.162702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.162711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.162957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.163179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.163187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.163195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.166727] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.326 [2024-04-26 13:15:24.175473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.176131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.176491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.176504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.176514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.176751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.176979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.176988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.176995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.180527] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.326 [2024-04-26 13:15:24.189295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.189936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.190310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.190322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.190332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.190570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.190792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.190800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.190808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.194347] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.326 [2024-04-26 13:15:24.203092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.326 [2024-04-26 13:15:24.203685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.204062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.326 [2024-04-26 13:15:24.204077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.326 [2024-04-26 13:15:24.204086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.326 [2024-04-26 13:15:24.204323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.326 [2024-04-26 13:15:24.204544] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.326 [2024-04-26 13:15:24.204553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.326 [2024-04-26 13:15:24.204560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.326 [2024-04-26 13:15:24.208095] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.326 [2024-04-26 13:15:24.217048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.217694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.218042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.218056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.218070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.218308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.218529] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.218537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.218544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.222079] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.327 [2024-04-26 13:15:24.230821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.231491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.231792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.231804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.231814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.232067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.232290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.232298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.232305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.235835] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.327 [2024-04-26 13:15:24.244776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.245330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.245696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.245708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.245718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.245963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.246185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.246193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.246201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.249731] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.327 [2024-04-26 13:15:24.258675] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.259206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.259448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.259459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.259467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.259689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.259914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.259922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.259929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.263451] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.327 [2024-04-26 13:15:24.272599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.273166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.273499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.273509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.273517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.273735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.273958] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.273966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.273973] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.277494] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.327 [2024-04-26 13:15:24.286432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.287070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.287435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.287448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.287458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.287694] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.287922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.287932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.287939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.291470] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.327 [2024-04-26 13:15:24.300209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.300886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.301236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.301248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.301258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.301495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.301721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.301729] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.301736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.305274] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.327 [2024-04-26 13:15:24.314010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.314649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.314901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.314915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.314925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.315162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.327 [2024-04-26 13:15:24.315384] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.327 [2024-04-26 13:15:24.315392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.327 [2024-04-26 13:15:24.315400] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.327 [2024-04-26 13:15:24.318933] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.327 [2024-04-26 13:15:24.327883] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.327 [2024-04-26 13:15:24.328546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.328904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.327 [2024-04-26 13:15:24.328919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.327 [2024-04-26 13:15:24.328928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.327 [2024-04-26 13:15:24.329165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.328 [2024-04-26 13:15:24.329386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.328 [2024-04-26 13:15:24.329394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.328 [2024-04-26 13:15:24.329401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.328 [2024-04-26 13:15:24.332945] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.328 [2024-04-26 13:15:24.341683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.328 [2024-04-26 13:15:24.342334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.328 [2024-04-26 13:15:24.342695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.328 [2024-04-26 13:15:24.342708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.328 [2024-04-26 13:15:24.342718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.328 [2024-04-26 13:15:24.342963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.328 [2024-04-26 13:15:24.343186] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.328 [2024-04-26 13:15:24.343198] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.328 [2024-04-26 13:15:24.343205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.328 [2024-04-26 13:15:24.346735] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.328 [2024-04-26 13:15:24.355470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.328 [2024-04-26 13:15:24.356162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.328 [2024-04-26 13:15:24.356519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.328 [2024-04-26 13:15:24.356531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.328 [2024-04-26 13:15:24.356541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.328 [2024-04-26 13:15:24.356778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.328 [2024-04-26 13:15:24.357010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.328 [2024-04-26 13:15:24.357019] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.328 [2024-04-26 13:15:24.357026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.328 [2024-04-26 13:15:24.360556] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.328 [2024-04-26 13:15:24.369293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.328 [2024-04-26 13:15:24.369884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.328 [2024-04-26 13:15:24.370166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.328 [2024-04-26 13:15:24.370176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.328 [2024-04-26 13:15:24.370185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.328 [2024-04-26 13:15:24.370407] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.328 [2024-04-26 13:15:24.370625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.328 [2024-04-26 13:15:24.370632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.328 [2024-04-26 13:15:24.370639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.328 [2024-04-26 13:15:24.374169] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.613 [2024-04-26 13:15:24.383115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.613 [2024-04-26 13:15:24.383637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.384022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.384033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.613 [2024-04-26 13:15:24.384040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.613 [2024-04-26 13:15:24.384258] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.613 [2024-04-26 13:15:24.384476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.613 [2024-04-26 13:15:24.384483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.613 [2024-04-26 13:15:24.384495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.613 [2024-04-26 13:15:24.388019] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.613 [2024-04-26 13:15:24.396995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.613 [2024-04-26 13:15:24.397636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.397985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.398000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.613 [2024-04-26 13:15:24.398010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.613 [2024-04-26 13:15:24.398247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.613 [2024-04-26 13:15:24.398468] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.613 [2024-04-26 13:15:24.398476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.613 [2024-04-26 13:15:24.398483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.613 [2024-04-26 13:15:24.402016] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.613 [2024-04-26 13:15:24.410971] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.613 [2024-04-26 13:15:24.411632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.411860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.411876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.613 [2024-04-26 13:15:24.411885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.613 [2024-04-26 13:15:24.412122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.613 [2024-04-26 13:15:24.412344] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.613 [2024-04-26 13:15:24.412352] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.613 [2024-04-26 13:15:24.412359] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.613 [2024-04-26 13:15:24.415892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.613 [2024-04-26 13:15:24.424847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.613 [2024-04-26 13:15:24.425514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.425871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.425885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.613 [2024-04-26 13:15:24.425895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.613 [2024-04-26 13:15:24.426132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.613 [2024-04-26 13:15:24.426353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.613 [2024-04-26 13:15:24.426361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.613 [2024-04-26 13:15:24.426369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.613 [2024-04-26 13:15:24.429910] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.613 [2024-04-26 13:15:24.438661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.613 [2024-04-26 13:15:24.439337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.439695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.439708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.613 [2024-04-26 13:15:24.439717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.613 [2024-04-26 13:15:24.439963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.613 [2024-04-26 13:15:24.440185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.613 [2024-04-26 13:15:24.440193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.613 [2024-04-26 13:15:24.440200] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.613 [2024-04-26 13:15:24.443731] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.613 [2024-04-26 13:15:24.452465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.613 [2024-04-26 13:15:24.452985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.453344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.613 [2024-04-26 13:15:24.453357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.613 [2024-04-26 13:15:24.453366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.613 [2024-04-26 13:15:24.453603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.613 [2024-04-26 13:15:24.453824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.613 [2024-04-26 13:15:24.453832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.613 [2024-04-26 13:15:24.453848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.613 [2024-04-26 13:15:24.457379] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.613 [2024-04-26 13:15:24.466324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.466961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.467342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.467355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.467365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.467602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.467823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.467831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.467846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.471376] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.614 [2024-04-26 13:15:24.480126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.480797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.481058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.481072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.481082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.481319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.481541] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.481549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.481556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.485091] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.614 [2024-04-26 13:15:24.494044] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.494695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.494964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.494978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.494988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.495226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.495448] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.495456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.495463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.498997] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.614 [2024-04-26 13:15:24.507943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.508517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.508852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.508862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.508870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.509089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.509307] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.509314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.509321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.512850] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.614 [2024-04-26 13:15:24.521792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.522425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.522728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.522741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.522750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.522997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.523218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.523227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.523234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.526762] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.614 [2024-04-26 13:15:24.535727] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.536386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.536749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.536762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.536772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.537016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.537238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.537246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.537254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.540783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.614 [2024-04-26 13:15:24.549521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.549969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.550299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.550311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.550321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.550557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.550778] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.550786] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.550794] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.554331] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.614 [2024-04-26 13:15:24.563489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.564168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.564420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.564436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.564446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.564683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.564913] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.564922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.564929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.568458] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.614 [2024-04-26 13:15:24.577404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.577961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.578316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.578329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.578338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.614 [2024-04-26 13:15:24.578576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.614 [2024-04-26 13:15:24.578797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.614 [2024-04-26 13:15:24.578805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.614 [2024-04-26 13:15:24.578812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.614 [2024-04-26 13:15:24.582349] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.614 [2024-04-26 13:15:24.591291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.614 [2024-04-26 13:15:24.591824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.592169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.614 [2024-04-26 13:15:24.592180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.614 [2024-04-26 13:15:24.592188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.615 [2024-04-26 13:15:24.592406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.615 [2024-04-26 13:15:24.592624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.615 [2024-04-26 13:15:24.592631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.615 [2024-04-26 13:15:24.592638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.615 [2024-04-26 13:15:24.596166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.615 [2024-04-26 13:15:24.605137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.615 [2024-04-26 13:15:24.605806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.606166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.606179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.615 [2024-04-26 13:15:24.606193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.615 [2024-04-26 13:15:24.606431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.615 [2024-04-26 13:15:24.606652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.615 [2024-04-26 13:15:24.606660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.615 [2024-04-26 13:15:24.606667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.615 [2024-04-26 13:15:24.610203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.615 [2024-04-26 13:15:24.618954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.615 [2024-04-26 13:15:24.619607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.619966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.619980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.615 [2024-04-26 13:15:24.619990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.615 [2024-04-26 13:15:24.620227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.615 [2024-04-26 13:15:24.620449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.615 [2024-04-26 13:15:24.620457] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.615 [2024-04-26 13:15:24.620464] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.615 [2024-04-26 13:15:24.623996] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.615 [2024-04-26 13:15:24.632730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.615 [2024-04-26 13:15:24.633396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.633753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.633766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.615 [2024-04-26 13:15:24.633776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.615 [2024-04-26 13:15:24.634021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.615 [2024-04-26 13:15:24.634243] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.615 [2024-04-26 13:15:24.634251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.615 [2024-04-26 13:15:24.634258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.615 [2024-04-26 13:15:24.637788] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.615 [2024-04-26 13:15:24.646526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.615 [2024-04-26 13:15:24.647157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.647514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.647527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.615 [2024-04-26 13:15:24.647537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.615 [2024-04-26 13:15:24.647781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.615 [2024-04-26 13:15:24.648010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.615 [2024-04-26 13:15:24.648018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.615 [2024-04-26 13:15:24.648026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.615 [2024-04-26 13:15:24.651552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.615 [2024-04-26 13:15:24.660497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.615 [2024-04-26 13:15:24.661025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.661397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.615 [2024-04-26 13:15:24.661410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.615 [2024-04-26 13:15:24.661420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.615 [2024-04-26 13:15:24.661657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.615 [2024-04-26 13:15:24.661886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.615 [2024-04-26 13:15:24.661894] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.615 [2024-04-26 13:15:24.661902] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.615 [2024-04-26 13:15:24.665433] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.878 [2024-04-26 13:15:24.674388] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.878 [2024-04-26 13:15:24.675089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.675462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.675474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.878 [2024-04-26 13:15:24.675484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.878 [2024-04-26 13:15:24.675721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.878 [2024-04-26 13:15:24.675949] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.878 [2024-04-26 13:15:24.675958] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.878 [2024-04-26 13:15:24.675965] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.878 [2024-04-26 13:15:24.679499] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.878 [2024-04-26 13:15:24.688250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.878 [2024-04-26 13:15:24.688826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.689096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.689106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.878 [2024-04-26 13:15:24.689114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.878 [2024-04-26 13:15:24.689332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.878 [2024-04-26 13:15:24.689555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.878 [2024-04-26 13:15:24.689563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.878 [2024-04-26 13:15:24.689569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.878 [2024-04-26 13:15:24.693098] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.878 [2024-04-26 13:15:24.702043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.878 [2024-04-26 13:15:24.702610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.702955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.702966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.878 [2024-04-26 13:15:24.702974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.878 [2024-04-26 13:15:24.703192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.878 [2024-04-26 13:15:24.703409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.878 [2024-04-26 13:15:24.703416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.878 [2024-04-26 13:15:24.703424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.878 [2024-04-26 13:15:24.706954] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.878 [2024-04-26 13:15:24.715899] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.878 [2024-04-26 13:15:24.716543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.716939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.716954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.878 [2024-04-26 13:15:24.716964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.878 [2024-04-26 13:15:24.717201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.878 [2024-04-26 13:15:24.717423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.878 [2024-04-26 13:15:24.717431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.878 [2024-04-26 13:15:24.717438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.878 [2024-04-26 13:15:24.720974] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.878 [2024-04-26 13:15:24.729709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.878 [2024-04-26 13:15:24.730395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.730701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.730714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.878 [2024-04-26 13:15:24.730723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.878 [2024-04-26 13:15:24.730966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.878 [2024-04-26 13:15:24.731189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.878 [2024-04-26 13:15:24.731202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.878 [2024-04-26 13:15:24.731209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.878 [2024-04-26 13:15:24.734753] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.878 [2024-04-26 13:15:24.743503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.878 [2024-04-26 13:15:24.744159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.744536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.744549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.878 [2024-04-26 13:15:24.744558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.878 [2024-04-26 13:15:24.744795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.878 [2024-04-26 13:15:24.745023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.878 [2024-04-26 13:15:24.745031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.878 [2024-04-26 13:15:24.745039] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.878 [2024-04-26 13:15:24.748615] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.878 [2024-04-26 13:15:24.757365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.878 [2024-04-26 13:15:24.757977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.758104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.878 [2024-04-26 13:15:24.758117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.878 [2024-04-26 13:15:24.758126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.878 [2024-04-26 13:15:24.758365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.878 [2024-04-26 13:15:24.758586] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.878 [2024-04-26 13:15:24.758594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.878 [2024-04-26 13:15:24.758601] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.878 [2024-04-26 13:15:24.762138] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.879 [2024-04-26 13:15:24.771293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.771969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.772338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.772351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.772361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.772598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.772819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.772828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.879 [2024-04-26 13:15:24.772846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.879 [2024-04-26 13:15:24.776379] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.879 [2024-04-26 13:15:24.785115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.785641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.785977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.785988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.785995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.786214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.786432] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.786439] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.879 [2024-04-26 13:15:24.786446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.879 [2024-04-26 13:15:24.789973] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.879 [2024-04-26 13:15:24.798911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.799563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.799960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.799974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.799983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.800220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.800442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.800449] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.879 [2024-04-26 13:15:24.800457] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.879 [2024-04-26 13:15:24.803990] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.879 [2024-04-26 13:15:24.812760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.813412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.813778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.813791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.813801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.814045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.814267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.814275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.879 [2024-04-26 13:15:24.814283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.879 [2024-04-26 13:15:24.817817] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.879 [2024-04-26 13:15:24.826568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.827227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.827611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.827624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.827634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.827878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.828101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.828109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.879 [2024-04-26 13:15:24.828116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.879 [2024-04-26 13:15:24.831644] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.879 [2024-04-26 13:15:24.840401] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.841072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.841432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.841445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.841454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.841692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.841919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.841928] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.879 [2024-04-26 13:15:24.841935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.879 [2024-04-26 13:15:24.845469] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.879 [2024-04-26 13:15:24.854214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.854755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.855011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.855021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.855029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.855247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.855465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.855473] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.879 [2024-04-26 13:15:24.855480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.879 [2024-04-26 13:15:24.859004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.879 [2024-04-26 13:15:24.868160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.879 [2024-04-26 13:15:24.868618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.868899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.879 [2024-04-26 13:15:24.868910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.879 [2024-04-26 13:15:24.868917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.879 [2024-04-26 13:15:24.869135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.879 [2024-04-26 13:15:24.869352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.879 [2024-04-26 13:15:24.869359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.880 [2024-04-26 13:15:24.869366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.880 [2024-04-26 13:15:24.872972] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
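The cycle repeated above is the bdev_nvme reconnect path hitting a dead endpoint: every posix_sock_create attempt to 10.0.0.2:4420 fails with errno = 111, which on Linux is ECONNREFUSED, because the previous nvmf_tgt listener is gone. The standalone sketch below (plain POSIX sockets, not SPDK's posix.c; only the address and port are taken from the log, the file name and program are hypothetical) reproduces that errno against any reachable host with nothing bound on the port:

    /* connect_probe.c - sketch of the ECONNREFUSED (errno 111) failure seen in the log.
     * Address/port come from the log lines above; with no listener bound there,
     * connect() fails the same way posix_sock_create reports. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* On a reachable host with no listener this prints errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Once a listener is bound on that port again, the same connect() succeeds, which is what the retries below are waiting for while the target is restarted.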
00:32:19.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 11333 Killed "${NVMF_APP[@]}" "$@" 00:32:19.880 13:15:24 -- host/bdevperf.sh@36 -- # tgt_init 00:32:19.880 13:15:24 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:19.880 13:15:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:19.880 13:15:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:19.880 [2024-04-26 13:15:24.882131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.880 13:15:24 -- common/autotest_common.sh@10 -- # set +x 00:32:19.880 [2024-04-26 13:15:24.882624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.883018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.883028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.880 [2024-04-26 13:15:24.883036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.880 [2024-04-26 13:15:24.883254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.880 [2024-04-26 13:15:24.883472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.880 [2024-04-26 13:15:24.883479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.880 [2024-04-26 13:15:24.883486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.880 [2024-04-26 13:15:24.887010] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.880 13:15:24 -- nvmf/common.sh@470 -- # nvmfpid=12805 00:32:19.880 13:15:24 -- nvmf/common.sh@471 -- # waitforlisten 12805 00:32:19.880 13:15:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:19.880 13:15:24 -- common/autotest_common.sh@817 -- # '[' -z 12805 ']' 00:32:19.880 13:15:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.880 13:15:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:19.880 13:15:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
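The shell trace interleaved above is the second half of the test: the old target (pid 11333) has been killed, tgt_init launches a fresh nvmf_tgt (-i 0 -e 0xFFFF -m 0xE) inside the cvl_0_0_ns_spdk namespace, records its pid as nvmfpid=12805, and then blocks in waitforlisten until the new process is serving /var/tmp/spdk.sock; the host-side reset failures keep interleaving with this trace until that happens. A minimal sketch of what a waitforlisten-style poll amounts to, assuming it only waits for the RPC socket (the real helper in SPDK's common test scripts may do more):

    # Sketch only - not the actual SPDK helper. Poll until the target either
    # exposes its RPC socket or exits, whichever comes first.
    pid=12805                    # nvmfpid reported in the trace above
    rpc_sock=/var/tmp/spdk.sock  # socket named in the "Waiting for process..." message
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
        [ -S "$rpc_sock" ] && { echo "nvmf_tgt is listening"; break; }
        sleep 0.1
    done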
00:32:19.880 13:15:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:19.880 13:15:24 -- common/autotest_common.sh@10 -- # set +x 00:32:19.880 [2024-04-26 13:15:24.895954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.880 [2024-04-26 13:15:24.896538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.896848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.896863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.880 [2024-04-26 13:15:24.896870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.880 [2024-04-26 13:15:24.897088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.880 [2024-04-26 13:15:24.897305] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.880 [2024-04-26 13:15:24.897312] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.880 [2024-04-26 13:15:24.897320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.880 [2024-04-26 13:15:24.901043] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.880 [2024-04-26 13:15:24.909790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.880 [2024-04-26 13:15:24.910426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.910821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.910834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.880 [2024-04-26 13:15:24.910851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.880 [2024-04-26 13:15:24.911089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.880 [2024-04-26 13:15:24.911311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.880 [2024-04-26 13:15:24.911319] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.880 [2024-04-26 13:15:24.911326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.880 [2024-04-26 13:15:24.914863] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.880 [2024-04-26 13:15:24.923606] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.880 [2024-04-26 13:15:24.924287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.924543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.880 [2024-04-26 13:15:24.924556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:19.880 [2024-04-26 13:15:24.924566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:19.880 [2024-04-26 13:15:24.924804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:19.880 [2024-04-26 13:15:24.925032] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.880 [2024-04-26 13:15:24.925041] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.880 [2024-04-26 13:15:24.925048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.880 [2024-04-26 13:15:24.928584] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.144 [2024-04-26 13:15:24.936693] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:32:20.144 [2024-04-26 13:15:24.936739] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.144 [2024-04-26 13:15:24.937545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.144 [2024-04-26 13:15:24.937851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.938150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.938161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.144 [2024-04-26 13:15:24.938169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.144 [2024-04-26 13:15:24.938388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.144 [2024-04-26 13:15:24.938606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.144 [2024-04-26 13:15:24.938614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.144 [2024-04-26 13:15:24.938622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.144 [2024-04-26 13:15:24.942150] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.144 [2024-04-26 13:15:24.951509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.144 [2024-04-26 13:15:24.952203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.952563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.952576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.144 [2024-04-26 13:15:24.952586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.144 [2024-04-26 13:15:24.952824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.144 [2024-04-26 13:15:24.953053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.144 [2024-04-26 13:15:24.953062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.144 [2024-04-26 13:15:24.953070] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.144 [2024-04-26 13:15:24.956599] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.144 [2024-04-26 13:15:24.965343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.144 [2024-04-26 13:15:24.965946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.966324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.966337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.144 [2024-04-26 13:15:24.966347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.144 [2024-04-26 13:15:24.966585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.144 [2024-04-26 13:15:24.966807] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.144 [2024-04-26 13:15:24.966815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.144 [2024-04-26 13:15:24.966823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.144 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.144 [2024-04-26 13:15:24.970361] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
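The "EAL: No free 2048 kB hugepages reported on node 1" notice in the middle of the output is DPDK observing that NUMA node 1 has no free 2 MB hugepages while the new target initializes; startup continues anyway, so the hugepage reservation presumably lives elsewhere (node 0 and/or a different page size). The per-node pools can be inspected directly in sysfs; this is generic Linux, not something specific to these test scripts:

    # Show total/free 2048 kB hugepages per NUMA node (standard sysfs layout):
    for node in /sys/devices/system/node/node*; do
        pool="$node/hugepages/hugepages-2048kB"
        [ -d "$pool" ] || continue
        echo "$(basename "$node"): nr=$(cat "$pool/nr_hugepages") free=$(cat "$pool/free_hugepages")"
    done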
00:32:20.144 [2024-04-26 13:15:24.979323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.144 [2024-04-26 13:15:24.979945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.980277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.980290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.144 [2024-04-26 13:15:24.980300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.144 [2024-04-26 13:15:24.980538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.144 [2024-04-26 13:15:24.980759] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.144 [2024-04-26 13:15:24.980768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.144 [2024-04-26 13:15:24.980775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.144 [2024-04-26 13:15:24.984315] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.144 [2024-04-26 13:15:24.993264] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.144 [2024-04-26 13:15:24.993938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.994327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:24.994340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.144 [2024-04-26 13:15:24.994350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.144 [2024-04-26 13:15:24.994587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.144 [2024-04-26 13:15:24.994809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.144 [2024-04-26 13:15:24.994817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.144 [2024-04-26 13:15:24.994825] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.144 [2024-04-26 13:15:24.998358] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.144 [2024-04-26 13:15:25.007108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.144 [2024-04-26 13:15:25.007420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:25.007770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.144 [2024-04-26 13:15:25.007781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.144 [2024-04-26 13:15:25.007789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.008014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.008233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.008241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.008248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.145 [2024-04-26 13:15:25.011773] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.145 [2024-04-26 13:15:25.018754] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:20.145 [2024-04-26 13:15:25.020970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.145 [2024-04-26 13:15:25.021510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.021849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.021867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.145 [2024-04-26 13:15:25.021877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.022115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.022337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.022345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.022352] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.145 [2024-04-26 13:15:25.025894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.145 [2024-04-26 13:15:25.034861] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.145 [2024-04-26 13:15:25.035578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.036054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.036091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.145 [2024-04-26 13:15:25.036102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.036340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.036562] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.036570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.036578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.145 [2024-04-26 13:15:25.040117] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.145 [2024-04-26 13:15:25.048765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.145 [2024-04-26 13:15:25.049476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.049734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.049747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.145 [2024-04-26 13:15:25.049757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.050003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.050226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.050234] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.050242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.145 [2024-04-26 13:15:25.053776] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.145 [2024-04-26 13:15:25.062727] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.145 [2024-04-26 13:15:25.063407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.063774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.063786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.145 [2024-04-26 13:15:25.063801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.064047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.064269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.064277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.064285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.145 [2024-04-26 13:15:25.067818] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.145 [2024-04-26 13:15:25.071141] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.145 [2024-04-26 13:15:25.071163] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.145 [2024-04-26 13:15:25.071169] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.145 [2024-04-26 13:15:25.071173] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.145 [2024-04-26 13:15:25.071177] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:20.145 [2024-04-26 13:15:25.071350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.145 [2024-04-26 13:15:25.071472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.145 [2024-04-26 13:15:25.071474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.145 [2024-04-26 13:15:25.076567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.145 [2024-04-26 13:15:25.077113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.077377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.077388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.145 [2024-04-26 13:15:25.077396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.077615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.077835] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.077848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.077856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.145 [2024-04-26 13:15:25.081387] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.145 [2024-04-26 13:15:25.090347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.145 [2024-04-26 13:15:25.090799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.091033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.091043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.145 [2024-04-26 13:15:25.091052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.091271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.091489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.091498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.091510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.145 [2024-04-26 13:15:25.095041] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
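The three "Reactor started on core 1/2/3" notices line up with the -m 0xE core mask the target was launched with, and with the earlier "Total cores available: 3": 0xE is binary 1110, so cores 1 through 3 get reactors and core 0 is skipped. The -e 0xFFFF option from the same command line is what surfaced above as "Tracepoint Group Mask 0xFFFF specified". Decoding the mask, purely for illustration:

    # Illustration only: expand the -m 0xE reactor core mask.
    mask=0xE
    for core in 0 1 2 3; do
        (( (mask >> core) & 1 )) && echo "core $core: reactor"
    done
    # core 1: reactor
    # core 2: reactor
    # core 3: reactor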
00:32:20.145 [2024-04-26 13:15:25.104193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.145 [2024-04-26 13:15:25.104742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.105063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.145 [2024-04-26 13:15:25.105075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.145 [2024-04-26 13:15:25.105083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.145 [2024-04-26 13:15:25.105302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.145 [2024-04-26 13:15:25.105519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.145 [2024-04-26 13:15:25.105527] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.145 [2024-04-26 13:15:25.105534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.146 [2024-04-26 13:15:25.109064] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.146 [2024-04-26 13:15:25.118006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.146 [2024-04-26 13:15:25.118596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.118920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.118931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.146 [2024-04-26 13:15:25.118938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.146 [2024-04-26 13:15:25.119156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.146 [2024-04-26 13:15:25.119374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.146 [2024-04-26 13:15:25.119382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.146 [2024-04-26 13:15:25.119389] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.146 [2024-04-26 13:15:25.122915] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.146 [2024-04-26 13:15:25.131855] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.146 [2024-04-26 13:15:25.132521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.132874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.132889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.146 [2024-04-26 13:15:25.132900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.146 [2024-04-26 13:15:25.133142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.146 [2024-04-26 13:15:25.133364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.146 [2024-04-26 13:15:25.133373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.146 [2024-04-26 13:15:25.133380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.146 [2024-04-26 13:15:25.136937] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.146 [2024-04-26 13:15:25.145679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.146 [2024-04-26 13:15:25.146279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.146542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.146553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.146 [2024-04-26 13:15:25.146560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.146 [2024-04-26 13:15:25.146779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.146 [2024-04-26 13:15:25.147003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.146 [2024-04-26 13:15:25.147012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.146 [2024-04-26 13:15:25.147019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.146 [2024-04-26 13:15:25.150541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.146 [2024-04-26 13:15:25.159492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.146 [2024-04-26 13:15:25.160176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.160521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.160534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.146 [2024-04-26 13:15:25.160544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.146 [2024-04-26 13:15:25.160783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.146 [2024-04-26 13:15:25.161011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.146 [2024-04-26 13:15:25.161020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.146 [2024-04-26 13:15:25.161028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.146 [2024-04-26 13:15:25.164556] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.146 [2024-04-26 13:15:25.173303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.146 [2024-04-26 13:15:25.173893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.174100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.174110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.146 [2024-04-26 13:15:25.174118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.146 [2024-04-26 13:15:25.174336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.146 [2024-04-26 13:15:25.174555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.146 [2024-04-26 13:15:25.174563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.146 [2024-04-26 13:15:25.174570] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.146 [2024-04-26 13:15:25.178099] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.146 [2024-04-26 13:15:25.187252] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.146 [2024-04-26 13:15:25.187749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.187807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.187816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.146 [2024-04-26 13:15:25.187823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.146 [2024-04-26 13:15:25.188049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.146 [2024-04-26 13:15:25.188267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.146 [2024-04-26 13:15:25.188275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.146 [2024-04-26 13:15:25.188282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.146 [2024-04-26 13:15:25.191802] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.146 [2024-04-26 13:15:25.201163] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.146 [2024-04-26 13:15:25.201750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.201984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.146 [2024-04-26 13:15:25.201995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.146 [2024-04-26 13:15:25.202003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.146 [2024-04-26 13:15:25.202221] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.146 [2024-04-26 13:15:25.202447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.146 [2024-04-26 13:15:25.202454] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.146 [2024-04-26 13:15:25.202461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.409 [2024-04-26 13:15:25.205992] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.409 [2024-04-26 13:15:25.214930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.409 [2024-04-26 13:15:25.215352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.409 [2024-04-26 13:15:25.215550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.409 [2024-04-26 13:15:25.215560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.409 [2024-04-26 13:15:25.215568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.409 [2024-04-26 13:15:25.215786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.409 [2024-04-26 13:15:25.216009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.409 [2024-04-26 13:15:25.216017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.409 [2024-04-26 13:15:25.216024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.409 [2024-04-26 13:15:25.219545] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.409 [2024-04-26 13:15:25.228742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.409 [2024-04-26 13:15:25.229413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.409 [2024-04-26 13:15:25.229754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.409 [2024-04-26 13:15:25.229767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.409 [2024-04-26 13:15:25.229777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.409 [2024-04-26 13:15:25.230023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.409 [2024-04-26 13:15:25.230245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.409 [2024-04-26 13:15:25.230253] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.409 [2024-04-26 13:15:25.230261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.409 [2024-04-26 13:15:25.233793] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.409 [2024-04-26 13:15:25.242554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.409 [2024-04-26 13:15:25.243182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.409 [2024-04-26 13:15:25.243526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.409 [2024-04-26 13:15:25.243538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.409 [2024-04-26 13:15:25.243548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.409 [2024-04-26 13:15:25.243785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.409 [2024-04-26 13:15:25.244012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.409 [2024-04-26 13:15:25.244021] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.409 [2024-04-26 13:15:25.244028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.247563] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.410 [2024-04-26 13:15:25.256517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.257190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.257276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.257288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.257297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.257534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.257756] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.257764] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.257772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.261305] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.410 [2024-04-26 13:15:25.270465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.270897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.271259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.271277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.271285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.271505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.271722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.271730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.271737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.275264] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.410 [2024-04-26 13:15:25.284416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.284846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.285129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.285140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.285148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.285366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.285584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.285591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.285598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.289124] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.410 [2024-04-26 13:15:25.298280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.298945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.299348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.299361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.299370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.299608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.299829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.299845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.299853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.303382] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.410 [2024-04-26 13:15:25.312125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.312812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.313187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.313200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.313214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.313452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.313674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.313682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.313689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.317220] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.410 [2024-04-26 13:15:25.325967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.326518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.326729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.326739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.326747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.326970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.327189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.327196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.327203] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.330729] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.410 [2024-04-26 13:15:25.339896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.340471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.340808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.340818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.340825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.341048] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.341266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.341273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.341280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.344800] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.410 [2024-04-26 13:15:25.353746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.354411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.354760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.354773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.354783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.355031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.355253] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.355262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.355270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.358798] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.410 [2024-04-26 13:15:25.367541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.368032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.368427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.368440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.410 [2024-04-26 13:15:25.368450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.410 [2024-04-26 13:15:25.368687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.410 [2024-04-26 13:15:25.368918] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.410 [2024-04-26 13:15:25.368927] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.410 [2024-04-26 13:15:25.368935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.410 [2024-04-26 13:15:25.372463] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.410 [2024-04-26 13:15:25.381410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.410 [2024-04-26 13:15:25.381966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.382323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.410 [2024-04-26 13:15:25.382336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.411 [2024-04-26 13:15:25.382346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.411 [2024-04-26 13:15:25.382583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.411 [2024-04-26 13:15:25.382805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.411 [2024-04-26 13:15:25.382813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.411 [2024-04-26 13:15:25.382820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.411 [2024-04-26 13:15:25.386356] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.411 [2024-04-26 13:15:25.395311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.411 [2024-04-26 13:15:25.395890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.396228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.396239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.411 [2024-04-26 13:15:25.396247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.411 [2024-04-26 13:15:25.396470] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.411 [2024-04-26 13:15:25.396694] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.411 [2024-04-26 13:15:25.396702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.411 [2024-04-26 13:15:25.396709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.411 [2024-04-26 13:15:25.400240] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.411 [2024-04-26 13:15:25.409187] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.411 [2024-04-26 13:15:25.409853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.410265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.410279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.411 [2024-04-26 13:15:25.410288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.411 [2024-04-26 13:15:25.410525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.411 [2024-04-26 13:15:25.410747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.411 [2024-04-26 13:15:25.410755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.411 [2024-04-26 13:15:25.410762] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.411 [2024-04-26 13:15:25.414297] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.411 [2024-04-26 13:15:25.423039] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.411 [2024-04-26 13:15:25.423472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.423676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.423689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.411 [2024-04-26 13:15:25.423697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.411 [2024-04-26 13:15:25.423921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.411 [2024-04-26 13:15:25.424140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.411 [2024-04-26 13:15:25.424147] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.411 [2024-04-26 13:15:25.424154] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.411 [2024-04-26 13:15:25.427679] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.411 [2024-04-26 13:15:25.436870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.411 [2024-04-26 13:15:25.437424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.437779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.437791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.411 [2024-04-26 13:15:25.437801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.411 [2024-04-26 13:15:25.438045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.411 [2024-04-26 13:15:25.438267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.411 [2024-04-26 13:15:25.438281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.411 [2024-04-26 13:15:25.438288] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.411 [2024-04-26 13:15:25.441819] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.411 [2024-04-26 13:15:25.450765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.411 [2024-04-26 13:15:25.451406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.451748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.451761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.411 [2024-04-26 13:15:25.451770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.411 [2024-04-26 13:15:25.452015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.411 [2024-04-26 13:15:25.452237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.411 [2024-04-26 13:15:25.452246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.411 [2024-04-26 13:15:25.452254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.411 [2024-04-26 13:15:25.455783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.411 [2024-04-26 13:15:25.464739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.411 [2024-04-26 13:15:25.465435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.465784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.411 [2024-04-26 13:15:25.465797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.411 [2024-04-26 13:15:25.465806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.411 [2024-04-26 13:15:25.466050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.411 [2024-04-26 13:15:25.466272] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.411 [2024-04-26 13:15:25.466281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.411 [2024-04-26 13:15:25.466288] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.673 [2024-04-26 13:15:25.469815] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.673 [2024-04-26 13:15:25.478556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.673 [2024-04-26 13:15:25.479075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.479555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.479568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.673 [2024-04-26 13:15:25.479578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.673 [2024-04-26 13:15:25.479816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.673 [2024-04-26 13:15:25.480043] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.673 [2024-04-26 13:15:25.480052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.673 [2024-04-26 13:15:25.480063] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.673 [2024-04-26 13:15:25.483592] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.673 [2024-04-26 13:15:25.492336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.673 [2024-04-26 13:15:25.492940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.493241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.493254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.673 [2024-04-26 13:15:25.493264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.673 [2024-04-26 13:15:25.493501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.673 [2024-04-26 13:15:25.493722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.673 [2024-04-26 13:15:25.493730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.673 [2024-04-26 13:15:25.493738] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.673 [2024-04-26 13:15:25.497272] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.673 [2024-04-26 13:15:25.506225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.673 [2024-04-26 13:15:25.506906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.507306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.507319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.673 [2024-04-26 13:15:25.507328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.673 [2024-04-26 13:15:25.507565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.673 [2024-04-26 13:15:25.507786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.673 [2024-04-26 13:15:25.507794] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.673 [2024-04-26 13:15:25.507802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.673 [2024-04-26 13:15:25.511337] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.673 [2024-04-26 13:15:25.520079] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.673 [2024-04-26 13:15:25.520764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.521107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.521121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.673 [2024-04-26 13:15:25.521130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.673 [2024-04-26 13:15:25.521367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.673 [2024-04-26 13:15:25.521588] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.673 [2024-04-26 13:15:25.521597] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.673 [2024-04-26 13:15:25.521604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.673 [2024-04-26 13:15:25.525141] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.673 [2024-04-26 13:15:25.533891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.673 [2024-04-26 13:15:25.534402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.534648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.534663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.673 [2024-04-26 13:15:25.534672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.673 [2024-04-26 13:15:25.534926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.673 [2024-04-26 13:15:25.535150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.673 [2024-04-26 13:15:25.535160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.673 [2024-04-26 13:15:25.535169] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.673 [2024-04-26 13:15:25.538697] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.673 [2024-04-26 13:15:25.547866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.673 [2024-04-26 13:15:25.548527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.548867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.673 [2024-04-26 13:15:25.548881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.673 [2024-04-26 13:15:25.548891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.673 [2024-04-26 13:15:25.549128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.673 [2024-04-26 13:15:25.549349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.673 [2024-04-26 13:15:25.549359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.673 [2024-04-26 13:15:25.549366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.673 [2024-04-26 13:15:25.552904] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.673 [2024-04-26 13:15:25.561650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.562297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.562640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.562653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.562662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.562908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.563130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.563138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.563146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.566674] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.674 [2024-04-26 13:15:25.575634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.576349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.576558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.576570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.576580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.576817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.577045] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.577055] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.577062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.580594] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.674 [2024-04-26 13:15:25.589552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.590206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.590548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.590561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.590571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.590808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.591037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.591045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.591053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.594579] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.674 [2024-04-26 13:15:25.603325] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.603882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.604293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.604303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.604311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.604529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.604747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.604755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.604761] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.608292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.674 [2024-04-26 13:15:25.617236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.617780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.618111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.618122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.618130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.618348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.618566] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.618574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.618580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.622104] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.674 [2024-04-26 13:15:25.631046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.631599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.631932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.631943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.631950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.632168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.632385] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.632393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.632400] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.635939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.674 [2024-04-26 13:15:25.644919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.645543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.645658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.645670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.645680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.645924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.646147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.646155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.646162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.649692] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.674 [2024-04-26 13:15:25.658854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.659349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.659584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.659598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.659611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.659856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.660080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.660089] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.660097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.663629] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.674 [2024-04-26 13:15:25.672794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.673467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.673813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.673826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.673836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.674081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.674303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.674312] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.674320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.677855] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.674 [2024-04-26 13:15:25.686603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.687183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.687361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.687375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.687384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.687621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.687849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.687858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.687866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.691395] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.674 [2024-04-26 13:15:25.700550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 [2024-04-26 13:15:25.701057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.701299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.701312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.701322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.701564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.674 [2024-04-26 13:15:25.701785] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.674 [2024-04-26 13:15:25.701794] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.674 [2024-04-26 13:15:25.701801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.674 [2024-04-26 13:15:25.705345] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
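The block above is the expected failure signature while the target side is down: each retry begins with nvme_ctrlr_disconnect, posix_sock_create() then fails with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 yet, nvme_tcp_qpair_connect_sock reports the socket error, controller re-initialization is aborted, and bdev_nvme prints "Resetting controller failed" before scheduling the next attempt. A minimal way to watch for the listener from another shell is sketched below; the address and port are taken from the errors above, while the probe itself (nc in a loop) is an illustrative assumption and not part of the test scripts.

    # Hypothetical probe, not part of autotest: poll until the NVMe/TCP listener accepts TCP connections.
    # 10.0.0.2 and 4420 are the address/port seen in the errors above; the loop is illustrative only.
    while ! nc -z -w 1 10.0.0.2 4420; do
        sleep 0.5   # connect() keeps failing with errno 111 (ECONNREFUSED) until the target listens
    done
    echo "NVMe/TCP listener is reachable"

Once the listener is back (see the "Target Listening" notice further down), the same reset path completes and the log switches to "Resetting controller successful".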
00:32:20.674 13:15:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:20.674 13:15:25 -- common/autotest_common.sh@850 -- # return 0 00:32:20.674 13:15:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:20.674 13:15:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:20.674 [2024-04-26 13:15:25.714500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.674 13:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.674 [2024-04-26 13:15:25.715170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.715515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.674 [2024-04-26 13:15:25.715528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.674 [2024-04-26 13:15:25.715538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.674 [2024-04-26 13:15:25.715775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.675 [2024-04-26 13:15:25.716003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.675 [2024-04-26 13:15:25.716013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.675 [2024-04-26 13:15:25.716020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.675 [2024-04-26 13:15:25.719554] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.675 [2024-04-26 13:15:25.728299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.675 [2024-04-26 13:15:25.728893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.675 [2024-04-26 13:15:25.729108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.675 [2024-04-26 13:15:25.729118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.675 [2024-04-26 13:15:25.729126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.675 [2024-04-26 13:15:25.729345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.675 [2024-04-26 13:15:25.729563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.675 [2024-04-26 13:15:25.729570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.675 [2024-04-26 13:15:25.729577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.936 [2024-04-26 13:15:25.733104] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.936 [2024-04-26 13:15:25.742266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.936 [2024-04-26 13:15:25.742818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.936 [2024-04-26 13:15:25.743200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.936 [2024-04-26 13:15:25.743211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.936 [2024-04-26 13:15:25.743223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.937 [2024-04-26 13:15:25.743441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.937 [2024-04-26 13:15:25.743659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.937 [2024-04-26 13:15:25.743667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.937 [2024-04-26 13:15:25.743673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.937 [2024-04-26 13:15:25.747201] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.937 13:15:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.937 13:15:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:20.937 13:15:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.937 13:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.937 [2024-04-26 13:15:25.756136] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.937 [2024-04-26 13:15:25.756694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.756939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.756954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.937 [2024-04-26 13:15:25.756963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.937 [2024-04-26 13:15:25.757200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.937 [2024-04-26 13:15:25.757422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.937 [2024-04-26 13:15:25.757431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.937 [2024-04-26 13:15:25.757439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.937 [2024-04-26 13:15:25.757876] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.937 [2024-04-26 13:15:25.760972] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.937 13:15:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.937 13:15:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:20.937 13:15:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.937 13:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.937 [2024-04-26 13:15:25.769922] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.937 [2024-04-26 13:15:25.770515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.770880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.770895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.937 [2024-04-26 13:15:25.770904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.937 [2024-04-26 13:15:25.771141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.937 [2024-04-26 13:15:25.771362] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.937 [2024-04-26 13:15:25.771372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.937 [2024-04-26 13:15:25.771379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.937 [2024-04-26 13:15:25.774916] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.937 [2024-04-26 13:15:25.783868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.937 [2024-04-26 13:15:25.784581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.784805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.784817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.937 [2024-04-26 13:15:25.784827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.937 [2024-04-26 13:15:25.785072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.937 [2024-04-26 13:15:25.785295] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.937 [2024-04-26 13:15:25.785303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.937 [2024-04-26 13:15:25.785310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.937 Malloc0 00:32:20.937 [2024-04-26 13:15:25.788841] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.937 13:15:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.937 13:15:25 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:20.937 13:15:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.937 13:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.937 [2024-04-26 13:15:25.797788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.937 [2024-04-26 13:15:25.798218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.798408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.798418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.937 [2024-04-26 13:15:25.798426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.937 [2024-04-26 13:15:25.798644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.937 [2024-04-26 13:15:25.798866] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.937 [2024-04-26 13:15:25.798875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.937 [2024-04-26 13:15:25.798882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.937 13:15:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.937 13:15:25 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.937 13:15:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.937 13:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.937 [2024-04-26 13:15:25.802410] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.937 [2024-04-26 13:15:25.811561] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.937 [2024-04-26 13:15:25.812255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.812620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.937 [2024-04-26 13:15:25.812633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd870 with addr=10.0.0.2, port=4420 00:32:20.937 [2024-04-26 13:15:25.812643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd870 is same with the state(5) to be set 00:32:20.937 [2024-04-26 13:15:25.812887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd870 (9): Bad file descriptor 00:32:20.937 13:15:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.937 [2024-04-26 13:15:25.813113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.937 [2024-04-26 13:15:25.813122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.937 [2024-04-26 13:15:25.813129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
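Interleaved with the retries, the shell trace shows host/bdevperf.sh rebuilding the target over JSON-RPC: nvmf_create_transport -t tcp -o -u 8192 (the "TCP Transport Init" notice above), bdev_malloc_create 64 512 -b Malloc0, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, and nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0; the nvmf_subsystem_add_listener call and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice follow immediately below, after which the reconnects succeed. As a hedged sketch (assuming a standalone nvmf_tgt rather than the autotest rpc_cmd wrapper), the same bring-up could be replayed with scripts/rpc.py using the arguments copied from the trace:

    # Sketch only: replay of the RPCs issued by host/bdevperf.sh, arguments copied from the trace above/below.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

For reference, the bdevperf summary printed a little further down is self-consistent: at the 4096-byte I/O size, 8246.35 IOPS works out to 8246.35 × 4096 / 2^20 ≈ 32.2 MiB/s, matching the reported 32.21 MiB/s.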
00:32:20.937 13:15:25 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.937 13:15:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.937 13:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.937 [2024-04-26 13:15:25.816662] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.937 [2024-04-26 13:15:25.820214] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.937 13:15:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.937 [2024-04-26 13:15:25.825399] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.937 13:15:25 -- host/bdevperf.sh@38 -- # wait 11701 00:32:20.937 [2024-04-26 13:15:25.860056] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:30.970 00:32:30.970 Latency(us) 00:32:30.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.970 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:30.970 Verification LBA range: start 0x0 length 0x4000 00:32:30.970 Nvme1n1 : 15.01 8246.35 32.21 9514.17 0.00 7182.18 778.24 15947.09 00:32:30.970 =================================================================================================================== 00:32:30.970 Total : 8246.35 32.21 9514.17 0.00 7182.18 778.24 15947.09 00:32:30.970 13:15:34 -- host/bdevperf.sh@39 -- # sync 00:32:30.970 13:15:34 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:30.970 13:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.970 13:15:34 -- common/autotest_common.sh@10 -- # set +x 00:32:30.970 13:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:30.970 13:15:34 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:30.970 13:15:34 -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:30.970 13:15:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:30.970 13:15:34 -- nvmf/common.sh@117 -- # sync 00:32:30.970 13:15:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:30.970 13:15:34 -- nvmf/common.sh@120 -- # set +e 00:32:30.970 13:15:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:30.970 13:15:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:30.970 rmmod nvme_tcp 00:32:30.970 rmmod nvme_fabrics 00:32:30.970 rmmod nvme_keyring 00:32:30.970 13:15:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:30.970 13:15:34 -- nvmf/common.sh@124 -- # set -e 00:32:30.970 13:15:34 -- nvmf/common.sh@125 -- # return 0 00:32:30.970 13:15:34 -- nvmf/common.sh@478 -- # '[' -n 12805 ']' 00:32:30.970 13:15:34 -- nvmf/common.sh@479 -- # killprocess 12805 00:32:30.970 13:15:34 -- common/autotest_common.sh@936 -- # '[' -z 12805 ']' 00:32:30.970 13:15:34 -- common/autotest_common.sh@940 -- # kill -0 12805 00:32:30.970 13:15:34 -- common/autotest_common.sh@941 -- # uname 00:32:30.971 13:15:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:30.971 13:15:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 12805 00:32:30.971 13:15:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:30.971 13:15:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:30.971 13:15:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 12805' 00:32:30.971 killing process with pid 12805 00:32:30.971 13:15:34 -- 
common/autotest_common.sh@955 -- # kill 12805 00:32:30.971 13:15:34 -- common/autotest_common.sh@960 -- # wait 12805 00:32:30.971 13:15:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:30.971 13:15:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:30.971 13:15:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:30.971 13:15:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:30.971 13:15:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:30.971 13:15:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.971 13:15:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.971 13:15:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.912 13:15:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:31.912 00:32:31.912 real 0m27.870s 00:32:31.912 user 1m3.023s 00:32:31.912 sys 0m7.050s 00:32:31.912 13:15:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:31.912 13:15:36 -- common/autotest_common.sh@10 -- # set +x 00:32:31.912 ************************************ 00:32:31.912 END TEST nvmf_bdevperf 00:32:31.912 ************************************ 00:32:31.912 13:15:36 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:31.912 13:15:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:31.912 13:15:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:31.912 13:15:36 -- common/autotest_common.sh@10 -- # set +x 00:32:32.172 ************************************ 00:32:32.172 START TEST nvmf_target_disconnect 00:32:32.172 ************************************ 00:32:32.172 13:15:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:32.172 * Looking for test storage... 
00:32:32.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:32.172 13:15:37 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.172 13:15:37 -- nvmf/common.sh@7 -- # uname -s 00:32:32.172 13:15:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.172 13:15:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.172 13:15:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.172 13:15:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.172 13:15:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.172 13:15:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.172 13:15:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.172 13:15:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.172 13:15:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.172 13:15:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.172 13:15:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:32.172 13:15:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:32.172 13:15:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.172 13:15:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.172 13:15:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.172 13:15:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.172 13:15:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.172 13:15:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.172 13:15:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.172 13:15:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.172 13:15:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.173 13:15:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.173 13:15:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.173 13:15:37 -- paths/export.sh@5 -- # export PATH 00:32:32.173 13:15:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.173 13:15:37 -- nvmf/common.sh@47 -- # : 0 00:32:32.173 13:15:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:32.173 13:15:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:32.173 13:15:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.173 13:15:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.173 13:15:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.173 13:15:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:32.173 13:15:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:32.173 13:15:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:32.173 13:15:37 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:32.173 13:15:37 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:32.173 13:15:37 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:32.173 13:15:37 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:32.173 13:15:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:32.173 13:15:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.173 13:15:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:32.173 13:15:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:32.173 13:15:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:32.173 13:15:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.173 13:15:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.173 13:15:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.173 13:15:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:32.173 13:15:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:32.173 13:15:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:32.173 13:15:37 -- common/autotest_common.sh@10 -- # set +x 00:32:40.316 13:15:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:40.316 13:15:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:40.316 13:15:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:40.316 13:15:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:40.316 13:15:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:40.316 13:15:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:40.316 13:15:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:40.316 
13:15:43 -- nvmf/common.sh@295 -- # net_devs=() 00:32:40.316 13:15:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:40.316 13:15:43 -- nvmf/common.sh@296 -- # e810=() 00:32:40.316 13:15:43 -- nvmf/common.sh@296 -- # local -ga e810 00:32:40.316 13:15:43 -- nvmf/common.sh@297 -- # x722=() 00:32:40.316 13:15:43 -- nvmf/common.sh@297 -- # local -ga x722 00:32:40.316 13:15:43 -- nvmf/common.sh@298 -- # mlx=() 00:32:40.316 13:15:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:40.316 13:15:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.316 13:15:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:40.316 13:15:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:40.316 13:15:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:40.316 13:15:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.316 13:15:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:40.316 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:40.316 13:15:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.316 13:15:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:40.316 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:40.316 13:15:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:40.316 13:15:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.316 13:15:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.316 13:15:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:40.316 13:15:43 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.316 13:15:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:40.316 Found net devices under 0000:31:00.0: cvl_0_0 00:32:40.316 13:15:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.316 13:15:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.316 13:15:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.316 13:15:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:40.316 13:15:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.316 13:15:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:40.316 Found net devices under 0000:31:00.1: cvl_0_1 00:32:40.316 13:15:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.316 13:15:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:40.316 13:15:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:40.316 13:15:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:40.316 13:15:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:40.316 13:15:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.316 13:15:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.316 13:15:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.316 13:15:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:40.316 13:15:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.316 13:15:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.316 13:15:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:40.316 13:15:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.316 13:15:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.316 13:15:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:40.316 13:15:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:40.316 13:15:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.316 13:15:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.316 13:15:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:40.316 13:15:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.316 13:15:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:40.316 13:15:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.316 13:15:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.316 13:15:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.316 13:15:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:40.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:32:40.316 00:32:40.316 --- 10.0.0.2 ping statistics --- 00:32:40.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.316 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:32:40.316 13:15:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:40.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:32:40.316 00:32:40.316 --- 10.0.0.1 ping statistics --- 00:32:40.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.316 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:32:40.316 13:15:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.316 13:15:44 -- nvmf/common.sh@411 -- # return 0 00:32:40.316 13:15:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:32:40.316 13:15:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.316 13:15:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:40.316 13:15:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:40.316 13:15:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.316 13:15:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:40.316 13:15:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:40.316 13:15:44 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:40.316 13:15:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:40.316 13:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:40.316 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.316 ************************************ 00:32:40.316 START TEST nvmf_target_disconnect_tc1 00:32:40.316 ************************************ 00:32:40.316 13:15:44 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:32:40.316 13:15:44 -- host/target_disconnect.sh@32 -- # set +e 00:32:40.316 13:15:44 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:40.316 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.316 [2024-04-26 13:15:44.595649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.316 [2024-04-26 13:15:44.596208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.316 [2024-04-26 13:15:44.596257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13865f0 with addr=10.0.0.2, port=4420 00:32:40.316 [2024-04-26 13:15:44.596291] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:40.317 [2024-04-26 13:15:44.596306] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:40.317 [2024-04-26 13:15:44.596313] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:40.317 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:40.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:40.317 Initializing NVMe Controllers 00:32:40.317 13:15:44 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:40.317 13:15:44 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:40.317 13:15:44 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:32:40.317 13:15:44 -- common/autotest_common.sh@1139 -- # return 0 00:32:40.317 13:15:44 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:40.317 13:15:44 -- host/target_disconnect.sh@41 -- # set -e 00:32:40.317 00:32:40.317 real 0m0.104s 00:32:40.317 user 0m0.046s 00:32:40.317 sys 0m0.057s 00:32:40.317 13:15:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:40.317 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.317 ************************************ 00:32:40.317 
END TEST nvmf_target_disconnect_tc1 00:32:40.317 ************************************ 00:32:40.317 13:15:44 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:40.317 13:15:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:40.317 13:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:40.317 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.317 ************************************ 00:32:40.317 START TEST nvmf_target_disconnect_tc2 00:32:40.317 ************************************ 00:32:40.317 13:15:44 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:32:40.317 13:15:44 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:40.317 13:15:44 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:40.317 13:15:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:40.317 13:15:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:40.317 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.317 13:15:44 -- nvmf/common.sh@470 -- # nvmfpid=19011 00:32:40.317 13:15:44 -- nvmf/common.sh@471 -- # waitforlisten 19011 00:32:40.317 13:15:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:40.317 13:15:44 -- common/autotest_common.sh@817 -- # '[' -z 19011 ']' 00:32:40.317 13:15:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.317 13:15:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:40.317 13:15:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.317 13:15:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:40.317 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.317 [2024-04-26 13:15:44.857703] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:32:40.317 [2024-04-26 13:15:44.857759] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:40.317 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.317 [2024-04-26 13:15:44.945023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:40.317 [2024-04-26 13:15:45.037749] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:40.317 [2024-04-26 13:15:45.037812] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:40.317 [2024-04-26 13:15:45.037821] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:40.317 [2024-04-26 13:15:45.037828] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:40.317 [2024-04-26 13:15:45.037835] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
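The tc1 case above is expected to fail: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening there, so every connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() reports the error, and the test only verifies that this failure is detected. A minimal sketch of that check, run from the SPDK repo root (a simplification, not the exact logic of host/target_disconnect.sh):

    # No target is listening yet, so a refused connection is the expected outcome.
    set +e
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    rc=$?
    set -e
    [ "$rc" -ne 0 ]    # a clean exit here would mean the missing target went unnoticed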
00:32:40.317 [2024-04-26 13:15:45.038445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:40.317 [2024-04-26 13:15:45.038669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:40.317 [2024-04-26 13:15:45.038910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:40.317 [2024-04-26 13:15:45.038915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:40.888 13:15:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:40.888 13:15:45 -- common/autotest_common.sh@850 -- # return 0 00:32:40.888 13:15:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:40.888 13:15:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:40.888 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:32:40.888 13:15:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.888 13:15:45 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:40.888 13:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.888 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:32:40.888 Malloc0 00:32:40.888 13:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.888 13:15:45 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:40.888 13:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.888 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:32:40.888 [2024-04-26 13:15:45.728892] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.888 13:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.888 13:15:45 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.888 13:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.888 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:32:40.888 13:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.888 13:15:45 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.888 13:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.888 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:32:40.888 13:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.888 13:15:45 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.888 13:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.888 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:32:40.888 [2024-04-26 13:15:45.769220] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.889 13:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.889 13:15:45 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:40.889 13:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.889 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:32:40.889 13:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.889 13:15:45 -- host/target_disconnect.sh@50 -- # reconnectpid=19188 00:32:40.889 13:15:45 -- host/target_disconnect.sh@52 -- # sleep 2 00:32:40.889 13:15:45 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:40.889 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.802 13:15:47 -- host/target_disconnect.sh@53 -- # kill -9 19011 00:32:42.802 13:15:47 -- host/target_disconnect.sh@55 -- # sleep 2 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 [2024-04-26 13:15:47.802587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read 
completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Write completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 Read completed with error (sct=0, sc=8) 00:32:42.803 starting I/O failed 00:32:42.803 [2024-04-26 13:15:47.802835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.803 [2024-04-26 13:15:47.803194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.803395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.803402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.803585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.803878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.803886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 
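For tc2, the trace above shows the target being started inside the cvl_0_0_ns_spdk namespace (set up earlier by nvmf_tcp_init, with cvl_0_0/10.0.0.2 on the target side and cvl_0_1/10.0.0.1 on the initiator side) and configured over RPC: a 64 MB Malloc0 bdev, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as a namespace, and listeners on 10.0.0.2:4420. The reconnect example is then started against that listener and the target is killed with SIGKILL, which is what produces the I/O and qpair failures that follow. A condensed sketch of the same sequence, using scripts/rpc.py directly instead of the harness's rpc_cmd wrapper and assuming it is run from the SPDK repo root (the tgt_pid variable and the fixed sleeps are stand-ins for the harness's own bookkeeping):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    tgt_pid=$!
    sleep 2    # the real harness uses waitforlisten to wait for the RPC socket instead
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$tgt_pid"    # in-flight I/O fails and every later reconnect attempt is refused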
00:32:42.803 [2024-04-26 13:15:47.804212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.804517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.804524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.804879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.805130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.805138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.805495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.805838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.805846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.806152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.806485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.806492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.806800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.807113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.807120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.807279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.807474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.807482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.807810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.808117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.808125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 
00:32:42.803 [2024-04-26 13:15:47.808467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.808643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.808651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.809001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.809337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.809344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.803 qpair failed and we were unable to recover it. 00:32:42.803 [2024-04-26 13:15:47.809671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.803 [2024-04-26 13:15:47.809844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.809852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.810123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.810382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.810389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.810721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.811030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.811037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.811364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.811681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.811688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.812016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.812207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.812214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 
00:32:42.804 [2024-04-26 13:15:47.812482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.812801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.812808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.813148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.813503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.813510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.813814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.814054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.814062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.814408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.814743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.814750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.815022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.815364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.815370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.815699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.815997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.816003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.816316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.816653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.816659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 
00:32:42.804 [2024-04-26 13:15:47.816821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.816878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.816884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.817307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.817586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.817592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.817908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.818248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.818254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.818545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.818888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.818895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.819206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.819490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.819497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.819614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.819876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.819883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.820212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.820509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.820516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 
00:32:42.804 [2024-04-26 13:15:47.820861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.821158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.821166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.821472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.821774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.821782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.822105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.822405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.822412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.822711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.823031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.823037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.823334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.823640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.823646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.823897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.824192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.824198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.824484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.824690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.824697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 
00:32:42.804 [2024-04-26 13:15:47.825055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.825378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.825384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.825523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.825806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.825813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.804 [2024-04-26 13:15:47.826148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.826472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.804 [2024-04-26 13:15:47.826479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.804 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.826763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.826923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.826930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.827258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.827595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.827602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.827900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.828195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.828201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.828491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.828766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.828773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 
00:32:42.805 [2024-04-26 13:15:47.829089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.829440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.829446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.829782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.830091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.830097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.830337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.830621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.830627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.830929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.831230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.831236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.831551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.831751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.831758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.832056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.832296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.832302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.832617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.832888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.832894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 
00:32:42.805 [2024-04-26 13:15:47.833171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.833482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.833488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.833793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.834131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.834137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.834378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.834725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.834732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.835049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.835353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.835360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.835696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.836004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.836010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.836310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.836624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.836631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.836935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.837237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.837244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 
00:32:42.805 [2024-04-26 13:15:47.837427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.837644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.837650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.837948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.838246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.838252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.838598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.838836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.838853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.839230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.839578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.839585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.839940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.840269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.840275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.840589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.840822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.840828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 00:32:42.805 [2024-04-26 13:15:47.841062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.841371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.805 [2024-04-26 13:15:47.841377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:42.805 qpair failed and we were unable to recover it. 
00:32:42.805 [2024-04-26 13:15:47.841526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.805 [2024-04-26 13:15:47.841736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.805 [2024-04-26 13:15:47.841743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:42.805 qpair failed and we were unable to recover it.
[... the same failure pattern repeats for every connection retry between 13:15:47.842 and 13:15:47.932: connect() failed, errno = 111 in posix.c:1037:posix_sock_create, followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:32:43.080 [2024-04-26 13:15:47.933049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.080 [2024-04-26 13:15:47.933393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.080 [2024-04-26 13:15:47.933399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:43.080 qpair failed and we were unable to recover it.
00:32:43.080 [2024-04-26 13:15:47.933691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.933967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.933974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.080 qpair failed and we were unable to recover it. 00:32:43.080 [2024-04-26 13:15:47.934271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.934586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.934592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.080 qpair failed and we were unable to recover it. 00:32:43.080 [2024-04-26 13:15:47.934799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.935127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.935134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.080 qpair failed and we were unable to recover it. 00:32:43.080 [2024-04-26 13:15:47.935435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.935734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.935741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.080 qpair failed and we were unable to recover it. 00:32:43.080 [2024-04-26 13:15:47.936053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.080 [2024-04-26 13:15:47.936212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.936219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.936422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.936637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.936644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.936957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.937306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.937313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 
00:32:43.081 [2024-04-26 13:15:47.937606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.937901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.937908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.938237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.938535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.938544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.938847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.939148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.939154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.939455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.939734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.939741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.940140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.940441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.940448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.940724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.941018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.941025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.941326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.941607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.941614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 
00:32:43.081 [2024-04-26 13:15:47.941916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.942213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.942219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.942525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.942873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.942880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.943169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.943522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.943530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.943893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.944166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.944173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.944477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.944799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.944808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.945112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.945488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.945495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.945849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.946193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.946199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 
00:32:43.081 [2024-04-26 13:15:47.946499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.946695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.946702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.947005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.947344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.947351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.947639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.947952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.947959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.948267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.948550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.948556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.081 qpair failed and we were unable to recover it. 00:32:43.081 [2024-04-26 13:15:47.948861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.081 [2024-04-26 13:15:47.949208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.949214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.949382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.949703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.949710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.950012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.950310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.950317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 
00:32:43.082 [2024-04-26 13:15:47.950530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.950707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.950715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.951005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.951340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.951347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.951646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.951932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.951939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.952344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.952682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.952688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.953005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.953327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.953333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.953633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.953985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.953992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.954317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.954640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.954647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 
00:32:43.082 [2024-04-26 13:15:47.954951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.955260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.955266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.955564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.955726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.955734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.956067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.956392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.956399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.956733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.956926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.956933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.957263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.957583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.957590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.957907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.958250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.958257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.958566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.958879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.958885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 
00:32:43.082 [2024-04-26 13:15:47.959190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.959501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.959507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.959662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.959946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.959954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.960267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.960572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.960578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.960866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.961026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.961033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.961237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.961569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.961575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.961794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.961973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.961980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.962273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.962604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.962610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 
00:32:43.082 [2024-04-26 13:15:47.962915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.963223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.963229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.963523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.963899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.963906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.964196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.964480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.964486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.082 qpair failed and we were unable to recover it. 00:32:43.082 [2024-04-26 13:15:47.964803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.965003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.082 [2024-04-26 13:15:47.965010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.965330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.965652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.965659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.965958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.966295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.966302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.966623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.966964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.966971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 
00:32:43.083 [2024-04-26 13:15:47.967244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.967601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.967608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.967921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.968088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.968095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.968386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.968582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.968588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.968763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.968942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.968949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.969123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.969416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.969424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.969736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.970107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.970114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.970407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.970740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.970746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 
00:32:43.083 [2024-04-26 13:15:47.971052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.971384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.971391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.971698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.972086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.972093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.972440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.972732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.972738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.972998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.973197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.973204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.973520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.973816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.973823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.974146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.974480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.974487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.974844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.975009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.975015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 
00:32:43.083 [2024-04-26 13:15:47.975434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.975772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.975784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.976088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.976387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.976393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.976707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.977023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.977030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.977335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.977658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.977664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.977998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.978331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.978338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.978525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.978853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.978860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.979154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.979440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.979446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 
00:32:43.083 [2024-04-26 13:15:47.979755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.980097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.980105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.980411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.980736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.980743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.083 qpair failed and we were unable to recover it. 00:32:43.083 [2024-04-26 13:15:47.981042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.981371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.083 [2024-04-26 13:15:47.981377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.981680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.982003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.982010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.982320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.982590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.982596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.982751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.983127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.983136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.983450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.983738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.983744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 
00:32:43.084 [2024-04-26 13:15:47.984018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.984333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.984339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.984649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.984972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.984980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.985289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.985606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.985614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.985850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.986071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.986078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.986249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.986526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.986532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.986862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.987183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.987189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.987490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.987775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.987781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 
00:32:43.084 [2024-04-26 13:15:47.988084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.988393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.988399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.988710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.989045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.989052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.989361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.989679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.989685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.990022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.990338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.990345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.990652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.990949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.990956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.991118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.991395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.991401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.991728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.991933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.991940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 
00:32:43.084 [2024-04-26 13:15:47.992230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.992641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.992648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.992943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.993226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.993232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.993533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.993856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.993862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.994023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.994292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.994299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.994634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.994939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.994945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.995269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.995573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.995580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.995834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.996130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.996137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 
00:32:43.084 [2024-04-26 13:15:47.996278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.996554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.996561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.996889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.997215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.997221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.084 qpair failed and we were unable to recover it. 00:32:43.084 [2024-04-26 13:15:47.997521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.084 [2024-04-26 13:15:47.997800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:47.997806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:47.998121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:47.998436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:47.998442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:47.998598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:47.998798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:47.998804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:47.999105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:47.999421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:47.999428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:47.999746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.000113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.000120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 
00:32:43.085 [2024-04-26 13:15:48.000270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.000535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.000542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.000849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.001138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.001145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.001461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.001776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.001782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.002116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.002441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.002448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.002758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.003067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.003074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.003355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.003648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.003655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.003940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.004268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.004275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 
00:32:43.085 [2024-04-26 13:15:48.004587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.004900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.004909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.005200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.005409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.005415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.005581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.005896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.005902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.006211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.006558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.006564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.006881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.007202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.007209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.007533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.007836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.007846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.008084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.008403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.008409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 
00:32:43.085 [2024-04-26 13:15:48.008703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.008895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.008902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.009230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.009534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.009541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.009832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.010174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.010181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.010429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.010749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.010756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.011084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.011408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.011415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.011728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.011997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.012010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.085 qpair failed and we were unable to recover it. 00:32:43.085 [2024-04-26 13:15:48.012283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.012501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.085 [2024-04-26 13:15:48.012507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 
00:32:43.086 [2024-04-26 13:15:48.012787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.013137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.013143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.013439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.013732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.013738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.014133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.014438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.014445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.014760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.015045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.015051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.015368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.015685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.015692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.016000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.016171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.016178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.016479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.016813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.016820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 
00:32:43.086 [2024-04-26 13:15:48.017001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.017308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.017315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.017646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.018341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.018359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.018526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.018816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.018822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.019127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.019476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.019483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.019798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.020111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.020118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.020426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.020768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.020775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.021085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.021417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.021423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 
00:32:43.086 [2024-04-26 13:15:48.021732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.022065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.022071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.022392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.022709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.022716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.022912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.023236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.023242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.023543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.023830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.023840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.024003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.024288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.024294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.024621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.024963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.024970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.025307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.025469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.025476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 
00:32:43.086 [2024-04-26 13:15:48.025780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.025997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.026005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.026327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.026652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.026659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.026868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.027172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.086 [2024-04-26 13:15:48.027179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.086 qpair failed and we were unable to recover it. 00:32:43.086 [2024-04-26 13:15:48.027500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.027821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.027827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.028018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.028378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.028384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.028682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.028818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.028826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.029144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.029471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.029478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 
00:32:43.087 [2024-04-26 13:15:48.029699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.030015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.030022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.030207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.030479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.030486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.030778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.031084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.031092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.031334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.031672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.031679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.031989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.032297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.032304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.032466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.032738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.032744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.033051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.033382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.033389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 
00:32:43.087 [2024-04-26 13:15:48.033607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.033857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.033865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.034055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.034265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.034274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.034594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.034890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.034897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.035196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.035400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.035406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.035704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.035904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.035911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.036152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.036494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.036500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.036816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.037111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.037117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 
00:32:43.087 [2024-04-26 13:15:48.037419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.037706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.037712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.037876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.038175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.038181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.038506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.038828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.038834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.039160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.039365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.039372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.039656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.039973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.039981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.040303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.040667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.040673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 00:32:43.087 [2024-04-26 13:15:48.040987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.041313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.041320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.087 qpair failed and we were unable to recover it. 
00:32:43.087 [2024-04-26 13:15:48.041636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.041814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.087 [2024-04-26 13:15:48.041821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.042054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.042414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.042420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.042725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.043033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.043041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.043353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.043650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.043657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.043971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.044273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.044279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.044603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.044792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.044799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.045118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.045500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.045507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 
00:32:43.088 [2024-04-26 13:15:48.045808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.046093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.046109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.046409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.046742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.046749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.047122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.047388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.047394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.047685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.047984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.047990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.048314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.048643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.048650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.048965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.049314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.049321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.049646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.049849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.049856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 
00:32:43.088 [2024-04-26 13:15:48.050200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.050523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.050530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.050824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.051178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.051185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.051414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.051711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.051717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.052014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.052265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.052272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.052446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.052774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.052783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.053113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.053430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.053437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.053747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.053983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.053990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 
00:32:43.088 [2024-04-26 13:15:48.054302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.054604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.054610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.054805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.055157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.055165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.055490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.055694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.055701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.056019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.056370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.056378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.056690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.057014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.057022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.057408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.057559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.057567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 00:32:43.088 [2024-04-26 13:15:48.057793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.058146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.088 [2024-04-26 13:15:48.058153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.088 qpair failed and we were unable to recover it. 
00:32:43.088 [2024-04-26 13:15:48.058461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.058776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.058782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.059070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.059367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.059373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.059657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.059958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.059965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.060248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.060563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.060569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.060873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.061070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.061078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.061306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.061495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.061502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.061778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.061969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.061977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 
00:32:43.089 [2024-04-26 13:15:48.062304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.062592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.062599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.062921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.063238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.063245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.063426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.063637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.063645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.063964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.064271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.064278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.064494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.064693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.064699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.065053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.065370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.065376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.065672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.065990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.065997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 
00:32:43.089 [2024-04-26 13:15:48.066303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.066609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.066615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.066904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.067219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.067225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.067420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.067813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.067820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.068091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.068418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.068425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.068642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.068866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.068874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.069236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.069546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.069552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.069872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.070071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.070078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 
00:32:43.089 [2024-04-26 13:15:48.070429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.070801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.070808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.071132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.071436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.071443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.071776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.071969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.071975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.072318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.072624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.072631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.072947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.073165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.073171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.073559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.073784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.089 [2024-04-26 13:15:48.073790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.089 qpair failed and we were unable to recover it. 00:32:43.089 [2024-04-26 13:15:48.074167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.074473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.074479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 
00:32:43.090 [2024-04-26 13:15:48.074809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.075151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.075157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.075515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.075817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.075823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.076053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.076276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.076282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.076597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.076946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.076953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.077127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.077431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.077437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.077764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.078059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.078066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.078367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.078735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.078741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 
00:32:43.090 [2024-04-26 13:15:48.079065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.079238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.079245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.079438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.079633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.079640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.079932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.080240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.080247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.080544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.080845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.080853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.081176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.081496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.081502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.081822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.082047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.082054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.082440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.082760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.082766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 
00:32:43.090 [2024-04-26 13:15:48.083146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.083354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.083360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.083681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.083913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.083920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.084148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.084442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.084449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.084778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.084994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.085001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.085324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.085657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.085663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.085982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.086323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.086329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 00:32:43.090 [2024-04-26 13:15:48.086611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.086784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.086791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.090 qpair failed and we were unable to recover it. 
00:32:43.090 [2024-04-26 13:15:48.086999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.087229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.090 [2024-04-26 13:15:48.087235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.087470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.087849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.087855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.088248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.088585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.088591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.088897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.089119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.089125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.089443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.089607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.089613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.089842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.090222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.090228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.090529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.090850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.090857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 
00:32:43.091 [2024-04-26 13:15:48.091062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.091385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.091391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.091691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.092010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.092017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.092353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.092627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.092633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.092807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.093058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.093065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.093394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.093673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.093680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.093991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.094311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.094317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.094523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.094906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.094912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 
00:32:43.091 [2024-04-26 13:15:48.095132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.095347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.095355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.095651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.095937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.095944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.096287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.096437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.096444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.096611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.096992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.096999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.097331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.097534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.097541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.097897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.098193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.098199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.098416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.098690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.098696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 
00:32:43.091 [2024-04-26 13:15:48.099044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.099254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.099261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.099464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.099752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.099758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.099985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.100318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.100325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.100620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.100907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.100914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.101102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.101358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.101366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.101660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.101984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.091 [2024-04-26 13:15:48.101992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.091 qpair failed and we were unable to recover it. 00:32:43.091 [2024-04-26 13:15:48.102360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.102638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.102644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 
00:32:43.092 [2024-04-26 13:15:48.102984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.103332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.103338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.103672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.104006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.104013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.104236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.104559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.104565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.104934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.105176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.105183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.105485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.105792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.105798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.106098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.106430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.106437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.106712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.107123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.107129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 
00:32:43.092 [2024-04-26 13:15:48.107419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.107712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.107718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.108013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.108317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.108323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.108513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.108696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.108703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.108884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.109176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.109183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.109400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.109690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.109697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.109990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.110309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.110315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.110609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.110792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.110799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 
00:32:43.092 [2024-04-26 13:15:48.111112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.111451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.111458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.111769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.112065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.112072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.112227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.112524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.112531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.112857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.113170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.113176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.113465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.113748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.113755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.114041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.114248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.114254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.114568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.114907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.114914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 
00:32:43.092 [2024-04-26 13:15:48.115221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.115531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.115537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.115848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.116045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.116051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.116379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.116713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.116721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.117098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.117417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.117424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.117734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.118064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.118071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.092 [2024-04-26 13:15:48.118410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.118696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.092 [2024-04-26 13:15:48.118702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.092 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.119023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.119305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.119311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 
00:32:43.093 [2024-04-26 13:15:48.119627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.119822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.119828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.120140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.120450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.120457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.120772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.120994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.121000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.121199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.121481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.121487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.121805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.122137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.122144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.122445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.122764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.122773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.122967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.123310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.123317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 
00:32:43.093 [2024-04-26 13:15:48.123631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.123928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.123935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.124259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.124564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.124571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.124766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.125134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.125141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.125329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.125677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.125683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.125937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.126276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.126282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.093 [2024-04-26 13:15:48.126651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.126919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.093 [2024-04-26 13:15:48.126925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.093 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.127256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.127606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.127613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 
00:32:43.363 [2024-04-26 13:15:48.127834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.128151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.128157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.128531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.128731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.128739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.129087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.129280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.129287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.129602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.129910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.129917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.130236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.130580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.130586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.130889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.131085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.131091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.131266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.131600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.131607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 
00:32:43.363 [2024-04-26 13:15:48.131906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.132219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.132226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.132525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.132843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.132850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.133155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.133442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.133448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.133750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.134058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.134065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.134362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.134676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.134684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.134983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.135289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.135295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.135604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.135910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.135916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 
00:32:43.363 [2024-04-26 13:15:48.136238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.136565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.136573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.136775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.137112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.137119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.137433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.137777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.137784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.138075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.138388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.138395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.138685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.139002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.139008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.139329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.139653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.139660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.139972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.140248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.140254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 
00:32:43.363 [2024-04-26 13:15:48.140605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.140910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.140917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.141250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.141556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.141563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.141791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.142080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.142087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.142301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.142591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.142598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.142953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.143264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.143270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.143466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.143787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.143793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.363 [2024-04-26 13:15:48.144104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.144451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.144457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 
00:32:43.363 [2024-04-26 13:15:48.144758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.145057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.363 [2024-04-26 13:15:48.145063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.363 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.145378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.145691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.145697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.145995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.146165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.146172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.146492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.146812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.146819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.147192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.147507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.147515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.147836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.148171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.148177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.148473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.148639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.148645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 
00:32:43.364 [2024-04-26 13:15:48.148928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.149253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.149259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.149572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.149891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.149897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.150214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.150547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.150554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.150849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.151160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.151166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.151358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.151660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.151666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.151983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.152301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.152307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.152616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.152655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.152662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 
00:32:43.364 [2024-04-26 13:15:48.152972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.153275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.153281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.153597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.153916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.153923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.154117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.154478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.154484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.154775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.155081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.155087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.155393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.156256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.156277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.156575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.156862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.156870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.157193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.157494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.157500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 
00:32:43.364 [2024-04-26 13:15:48.157818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.158154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.158160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.158452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.158774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.158781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.159094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.159391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.159398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.159701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.160010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.160017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.160407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.160674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.160680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.160996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.161298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.161304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.161648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.161932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.161939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 
00:32:43.364 [2024-04-26 13:15:48.162252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.162575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.162582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.162875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.163183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.163190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.163491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.163653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.163660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.164052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.164390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.164397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.164611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.164931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.164938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.165265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.165654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.165660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.165802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.166139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.166145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 
00:32:43.364 [2024-04-26 13:15:48.166461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.166762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.166769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.166982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.167288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.167295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.167604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.167930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.167937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.168286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.168627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.168635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.168965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.169141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.169148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.169428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.169745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.169758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.170066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.170381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.170387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 
00:32:43.364 [2024-04-26 13:15:48.170713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.170939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.170946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.171278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.171633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.171640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.171956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.172294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.172301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.172601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.172800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.172806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.172948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.173089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.173096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.173412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.173754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.173760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.174072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.174372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.174378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 
00:32:43.364 [2024-04-26 13:15:48.174680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.174962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.174968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.175288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.175606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.175612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.175907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.176223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.176230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.176542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.176822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.176828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.364 qpair failed and we were unable to recover it. 00:32:43.364 [2024-04-26 13:15:48.177145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.177476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.364 [2024-04-26 13:15:48.177483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.177789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.178098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.178105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.178413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.178716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.178723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 
00:32:43.365 [2024-04-26 13:15:48.179057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.179370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.179377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.179693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.180030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.180038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.180343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.180679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.180687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.180994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.181298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.181305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.181598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.181795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.181801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.181987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.182257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.182264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.182587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.182894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.182901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 
00:32:43.365 [2024-04-26 13:15:48.183249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.183521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.183528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.183831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.184119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.184126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.184425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.184755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.184762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.185057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.185355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.185361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.185534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.185739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.185746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.186033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.186445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.186451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.186738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.187055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.187062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 
00:32:43.365 [2024-04-26 13:15:48.187356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.187648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.187654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.187861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.188267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.188274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.188570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.188887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.188893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.189204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.189535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.189542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.189849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.190173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.190180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.190495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.190704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.190712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.190992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.191318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.191325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 
00:32:43.365 [2024-04-26 13:15:48.191522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.191844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.191851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.192170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.192488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.192494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.192780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.193106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.193112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.193306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.193521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.193528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.193844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.194173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.194179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.194470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.194788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.194794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.195098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.195366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.195372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 
00:32:43.365 [2024-04-26 13:15:48.195683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.195969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.195976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.196280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.196484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.196490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.196812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.197103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.197110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.197434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.197734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.197741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.198016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.198343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.198350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.198660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.198957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.198964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.199114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.199367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.199374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 
00:32:43.365 [2024-04-26 13:15:48.199683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.199980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.199986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.200297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.200618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.200625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.201000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.201340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.201346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.201662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.201978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.201984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.202207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.202534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.202540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.202851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.203202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.203208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.203367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.203710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.203716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 
00:32:43.365 [2024-04-26 13:15:48.204014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.204333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.204340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.204634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.204947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.204954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.205252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.205591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.365 [2024-04-26 13:15:48.205598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.365 qpair failed and we were unable to recover it. 00:32:43.365 [2024-04-26 13:15:48.205926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.206251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.206257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.206557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.206880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.206887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.207184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.207369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.207375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.207673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.207874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.207881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 
00:32:43.366 [2024-04-26 13:15:48.208097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.208408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.208415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.208748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.209031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.209039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.209355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.209658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.209664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.209988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.210225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.210232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.210529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.210725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.210732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.211065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.211397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.211404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.211719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.212053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.212060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 
00:32:43.366 [2024-04-26 13:15:48.212352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.212666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.212673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.213055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.213329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.213335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.213637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.213959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.213969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.214288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.214583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.214591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.214782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.215096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.215103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.215455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.215625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.215633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 00:32:43.366 [2024-04-26 13:15:48.215940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.216269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.216275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it. 
00:32:43.366 [2024-04-26 13:15:48.216618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.216903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.366 [2024-04-26 13:15:48.216910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.366 qpair failed and we were unable to recover it.
00:32:43.366 [2024-04-26 13:15:48.217208] .. [2024-04-26 13:15:48.306468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (this same error sequence repeats for every reconnect attempt to 10.0.0.2:4420 between 13:15:48.217 and 13:15:48.306)
00:32:43.369 [2024-04-26 13:15:48.306751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.307116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.307123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.307446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.307632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.307639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.307847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.308168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.308176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.308477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.308777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.308784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.309014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.309346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.309353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.309659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.309989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.309996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.310340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.310520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.310527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 
00:32:43.369 [2024-04-26 13:15:48.310843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.311186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.311193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.311509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.311848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.311856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.312052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.312377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.312384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.312718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.313015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.313022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.313354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.313701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.313708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.314050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.314280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.314289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.314607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.314766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.314773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 
00:32:43.369 [2024-04-26 13:15:48.315068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.315390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.315397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.315700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.315996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.316004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.316202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.316501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.316508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.316819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.316977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.316985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.317183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.317478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.317486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.317714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.317988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.317996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.318314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.318640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.318647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 
00:32:43.369 [2024-04-26 13:15:48.319094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.319430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.319437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.319753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.320074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.320084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.320382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.320625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.320632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.320915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.321111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.321119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.321435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.321756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.321763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.322000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.322335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.322343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.322396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.322578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.322589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 
00:32:43.369 [2024-04-26 13:15:48.322897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.323192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.323199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.323516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.323858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.323865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.323976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.324247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.324254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.324575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.324852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.324860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.325199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.325357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.325366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.325654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.325829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.325841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.326115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.326430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.326437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 
00:32:43.369 [2024-04-26 13:15:48.326730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.327069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.327077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.369 qpair failed and we were unable to recover it. 00:32:43.369 [2024-04-26 13:15:48.327403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.369 [2024-04-26 13:15:48.327712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.327719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.328025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.328277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.328284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.328596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.328879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.328886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.329109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.329413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.329420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.329695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.330084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.330091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.330436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.330744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.330752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 
00:32:43.370 [2024-04-26 13:15:48.331053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.331289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.331295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.331500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.331793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.331800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.332150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.332459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.332465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.332776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.333090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.333097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.333290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.333634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.333641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.333946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.334279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.334285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.334592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.334920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.334927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 
00:32:43.370 [2024-04-26 13:15:48.335258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.335600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.335607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.335858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.336142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.336150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.336468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.336789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.336796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.336992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.337222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.337229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.337398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.337739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.337746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.338051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.338351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.338357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.338678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.338993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.338999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 
00:32:43.370 [2024-04-26 13:15:48.339323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.339525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.339532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.339827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.340178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.340185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.340594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.340893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.340900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.341222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.341559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.341565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.341754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.342128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.342136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.342216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.342575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.342582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.342739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.343002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.343009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 
00:32:43.370 [2024-04-26 13:15:48.343203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.343528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.343534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.343913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.344246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.344253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.344571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.344885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.344892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.345261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.345597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.345603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.345938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.346250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.346257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.346449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.346720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.346727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.346946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.347199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.347207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 
00:32:43.370 [2024-04-26 13:15:48.347492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.347813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.347820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.348176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.348325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.348332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.348531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.348730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.348736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.348914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.349328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.349334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.349647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.349954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.349961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.350195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.350532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.350538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.350754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.350944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.350951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 
00:32:43.370 [2024-04-26 13:15:48.351253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.351325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.351332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.351518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.351835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.351844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.352207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.352426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.352432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.352830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.352944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.352950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.353289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.353610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.353616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.353828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.354004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.354012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.354088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.354375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.354381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 
00:32:43.370 [2024-04-26 13:15:48.354756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.355075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.355082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.355393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.355720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.355726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.370 qpair failed and we were unable to recover it. 00:32:43.370 [2024-04-26 13:15:48.356043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.370 [2024-04-26 13:15:48.356381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.356387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.356718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.357017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.357024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.357291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.357590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.357596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.357892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.358144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.358150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.358331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.358657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.358663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 
00:32:43.371 [2024-04-26 13:15:48.358979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.359202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.359209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.359546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.359801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.359808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.360046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.360347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.360354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.360669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.360999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.361005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.361301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.361500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.361506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.361807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.362124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.362130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.362388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.362653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.362659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 
00:32:43.371 [2024-04-26 13:15:48.362886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.363107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.363114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.363437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.363730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.363736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.364117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.364394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.364401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.364579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.364853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.364860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.365202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.365522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.365529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.365866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.366003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.366010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.366327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.366622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.366629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 
00:32:43.371 [2024-04-26 13:15:48.366939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.367291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.367297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.367589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.367655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.367662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.367992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.368303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.368309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.368602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.368917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.368924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.369228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.369539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.369545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.369866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.370198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.370205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.370404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.370750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.370757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 
00:32:43.371 [2024-04-26 13:15:48.370925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.371163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.371170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.371303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.371583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.371589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.371675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.371977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.371991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.372318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.372475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.372482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.372712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.372929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.372936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.373279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.373477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.373483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.373781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.373920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.373926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 
00:32:43.371 [2024-04-26 13:15:48.374243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.374586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.374592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.374803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.375134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.375141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.375450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.375769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.375776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.376098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.376416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.376423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.376738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.376944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.376951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.377124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.377483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.377490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.377785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.377962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.377969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 
00:32:43.371 [2024-04-26 13:15:48.378250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.378472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.378478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.378627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.378822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.378829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.379185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.379433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.379440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.379775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.380072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.380080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.380363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.380599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.380606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.380920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.381238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.381244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.381477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.381820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.381827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 
00:32:43.371 [2024-04-26 13:15:48.381984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.382352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.382359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.382669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.382966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.382973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.383298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.383621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.383628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.383769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.384145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.384152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.384450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.384777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.371 [2024-04-26 13:15:48.384784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.371 qpair failed and we were unable to recover it. 00:32:43.371 [2024-04-26 13:15:48.385101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.385418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.385425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.385724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.386049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.386055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 
00:32:43.372 [2024-04-26 13:15:48.386358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.386624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.386632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.386912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.387241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.387247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.387554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.387871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.387877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.388184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.388482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.388489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.388783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.389091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.389097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.389392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.389719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.389725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.389897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.390271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.390278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 
00:32:43.372 [2024-04-26 13:15:48.390573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.390922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.390928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.391220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.391504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.391510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.391814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.392010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.392016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.392350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.392645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.392651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.392845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.392938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.392944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.393277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.393614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.393621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.393912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.394251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.394261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 
00:32:43.372 [2024-04-26 13:15:48.394559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.394857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.394864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.395098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.395427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.395434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.395728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.396048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.396055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.396356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.396701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.396708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.396902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.397240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.397246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.397557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.397748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.397754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.398080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.398441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.398447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 
00:32:43.372 [2024-04-26 13:15:48.398710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.399070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.399077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.399351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.399670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.399676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.399985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.400312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.400320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.400728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.400907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.400914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.401229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.401414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.401421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.401735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.402030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.402037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.402209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.402488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.402495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 
00:32:43.372 [2024-04-26 13:15:48.402689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.402980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.402987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.403322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.403609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.403615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.403958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.404262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.404268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.404572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.404846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.404852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.405156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.405555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.405561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.405775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.406099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.406108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.406425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.406772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.406778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 
00:32:43.372 [2024-04-26 13:15:48.407009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.407447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.407453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.407683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.408007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.408014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.408246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.408579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.408586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.408760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.409000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.409006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.409338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.409562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.409568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.409873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.410184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.410190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.410480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.410793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.410799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 
00:32:43.372 [2024-04-26 13:15:48.410978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.411282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.411289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.411692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.412003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.412011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.412347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.412660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.412667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.412826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.413147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.372 [2024-04-26 13:15:48.413153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.372 qpair failed and we were unable to recover it. 00:32:43.372 [2024-04-26 13:15:48.413444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.413740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.413748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.413916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.414164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.414170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.414486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.414794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.414800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 
00:32:43.643 [2024-04-26 13:15:48.415144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.415462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.415468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.415777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.416634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.416651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.416870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.417169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.417175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.417478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.417794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.417801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.417992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.418333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.418340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.418649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.418965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.418972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.643 qpair failed and we were unable to recover it. 00:32:43.643 [2024-04-26 13:15:48.419286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.643 [2024-04-26 13:15:48.419615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.419621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 
00:32:43.644 [2024-04-26 13:15:48.419780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.420144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.420152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.420348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.420597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.420603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.420911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.421240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.421246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.421499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.421826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.421833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.422148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.422484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.422490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.422794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.423179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.423185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.423479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.423638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.423645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 
00:32:43.644 [2024-04-26 13:15:48.423947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.424266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.424272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.424562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.424869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.424876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.425103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.425427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.425433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.425607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.425912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.425919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.426233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.426496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.426502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.426792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.426960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.426968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.427288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.427586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.427592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 
00:32:43.644 [2024-04-26 13:15:48.427914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.428225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.428232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.428402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.428728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.428735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.429106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.429417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.429424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.429598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.429894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.429901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.430136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.430350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.430357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.430687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.430997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.431003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.431317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.431632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.431638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 
00:32:43.644 [2024-04-26 13:15:48.431840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.432126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.432132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.432440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.432754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.432761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.433040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.433371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.433378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.644 qpair failed and we were unable to recover it. 00:32:43.644 [2024-04-26 13:15:48.433711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.644 [2024-04-26 13:15:48.434012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.434019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.434181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.434560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.434567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.434879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.435040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.435047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.435372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.435669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.435676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 
00:32:43.645 [2024-04-26 13:15:48.435984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.436297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.436304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.436644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.436927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.436934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.437271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.437611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.437618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.437821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.438038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.438045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.438253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.438559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.438566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.438901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.439234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.439242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.439539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.439871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.439878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 
00:32:43.645 [2024-04-26 13:15:48.440217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.440403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.440410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.440717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.441012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.441019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.441336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.441653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.441660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.441970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.442281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.442288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.442599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.442905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.442912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.443266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.443567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.443574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.443869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.444184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.444190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 
00:32:43.645 [2024-04-26 13:15:48.444490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.444795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.444802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.444915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.445058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.445066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.445376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.445720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.445727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.446031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.446201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.446208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.446481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.446778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.446786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.447097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.447416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.447423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.447609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.447940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.447947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 
00:32:43.645 [2024-04-26 13:15:48.448282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.448572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.448579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.448880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.449178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.449184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.449486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.449812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.449818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.645 [2024-04-26 13:15:48.450102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.450276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.645 [2024-04-26 13:15:48.450283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.645 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.450609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.450903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.450910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.451230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.451599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.451606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.451912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.452214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.452221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 
00:32:43.646 [2024-04-26 13:15:48.452516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.452835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.452846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.453167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.453470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.453477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.453788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.454099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.454106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.454412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.454729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.454736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.455013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.455341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.455348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.455540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.455686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.455693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.455859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.456172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.456178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 
00:32:43.646 [2024-04-26 13:15:48.456487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.456764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.456771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.457078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.457410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.457417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.457720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.458011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.458018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.458342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.458539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.458545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.458810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.459094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.459101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.459455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.459607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.459614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.460006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.460344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.460350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 
00:32:43.646 [2024-04-26 13:15:48.460661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.460977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.460984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.461305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.461621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.461628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.461820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.462135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.462141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.462447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.462724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.462730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.463014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.463366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.463372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.463674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.463974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.463981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.464291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.464577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.464584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 
00:32:43.646 [2024-04-26 13:15:48.464880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.465107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.465113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.646 qpair failed and we were unable to recover it. 00:32:43.646 [2024-04-26 13:15:48.465427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.646 [2024-04-26 13:15:48.465741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.465747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.465821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.465926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.465933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.466238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.466413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.466420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.466716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.467013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.467020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.467341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.467661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.467668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.467964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.468282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.468289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 
00:32:43.647 [2024-04-26 13:15:48.468608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.468765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.468772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.468956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.469234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.469241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.469587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.469903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.469910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.470145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.470498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.470504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.470840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.471147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.471154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.471498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.471816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.471822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.472109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.472430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.472437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 
00:32:43.647 [2024-04-26 13:15:48.472735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.473015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.473022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.473218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.473576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.473583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.473887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.474176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.474182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.474491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.474792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.474799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.475128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.475472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.475479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.475764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.476065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.476072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.476409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.476726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.476733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 
00:32:43.647 [2024-04-26 13:15:48.477059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.477333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.477340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.477530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.477756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.477763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.478075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.478380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.478386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.478696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.479012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.479019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.479343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.479532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.479538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.479842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.480130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.480137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.480439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.480726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.480732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 
00:32:43.647 [2024-04-26 13:15:48.480923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.481250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.481256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.481546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.481868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.647 [2024-04-26 13:15:48.481875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.647 qpair failed and we were unable to recover it. 00:32:43.647 [2024-04-26 13:15:48.482064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.482437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.482443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.482736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.483022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.483029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.483341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.483542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.483548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.483867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.484553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.484568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.484861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.485562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.485577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 
00:32:43.648 [2024-04-26 13:15:48.485873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.486093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.486099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.486445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.486770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.486777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.487097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.487366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.487373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.487417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.487720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.487727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.488098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.488414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.488422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.488625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.488827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.488833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.489182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.489496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.489505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 
00:32:43.648 [2024-04-26 13:15:48.489814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.490131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.490138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.490459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.490647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.490654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.490810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.491159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.491166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.491481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.491797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.491803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.492030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.492362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.492368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.492665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.492949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.492955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.493275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.493559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.493566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 
00:32:43.648 [2024-04-26 13:15:48.493884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.494193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.494199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.494383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.494744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.648 [2024-04-26 13:15:48.494750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.648 qpair failed and we were unable to recover it. 00:32:43.648 [2024-04-26 13:15:48.495037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.495340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.495348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.495640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.495931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.495937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.496250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.496559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.496565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.496866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.497180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.497186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.497485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.497795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.497802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 
00:32:43.649 [2024-04-26 13:15:48.498151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.498500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.498507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.498823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.499009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.499018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.499334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.499644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.499651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.499998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.500192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.500198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.500507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.500852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.500859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.501143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.501336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.501344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.501625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.501933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.501940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 
00:32:43.649 [2024-04-26 13:15:48.502291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.502570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.502576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.502881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.503188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.503195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.503378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.503605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.503617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.503959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.504122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.504130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.504491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.504787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.504794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.505144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.505422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.505428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.505818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.506091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.506098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 
00:32:43.649 [2024-04-26 13:15:48.506419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.506721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.506728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.507021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.507354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.507362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.507659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.507945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.507952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.508262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.508576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.508582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.508896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.509236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.509242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.509523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.509841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.509848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 00:32:43.649 [2024-04-26 13:15:48.510150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.510454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.649 [2024-04-26 13:15:48.510461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.649 qpair failed and we were unable to recover it. 
00:32:43.649 [2024-04-26 13:15:48.510796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.649 [2024-04-26 13:15:48.511103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.650 [2024-04-26 13:15:48.511110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:43.650 qpair failed and we were unable to recover it.
[... the same sequence — two posix_sock_create connect() failures (errno = 111) followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 2024-04-26 13:15:48.511424 through 13:15:48.597997 (elapsed 00:32:43.649–00:32:43.655) ...]
00:32:43.655 [2024-04-26 13:15:48.598332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.598640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.598647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.598840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.599215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.599224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.599531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.599879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.599886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.600256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.600497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.600503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.600844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.601148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.601155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.601483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.601789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.601795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.602099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.602431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.602438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 
00:32:43.655 [2024-04-26 13:15:48.602738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.603048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.603055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.603370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.603652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.603659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.655 [2024-04-26 13:15:48.603961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.604289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.655 [2024-04-26 13:15:48.604295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.655 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.604492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.604813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.604819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.605139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.605459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.605466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.605625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.605912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.605920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.606093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.606478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.606485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 
00:32:43.656 [2024-04-26 13:15:48.606822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.607115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.607122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.607409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.607660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.607667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.607988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.608290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.608297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.608616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.609007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.609014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.609374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.609712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.609719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.610106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.610380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.610386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.610688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.611001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.611008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 
00:32:43.656 [2024-04-26 13:15:48.611190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.611287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.611294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.611612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.611914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.611921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.612248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.612561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.612568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.612758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.612849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.612855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.613159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.613446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.613453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.613787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.613903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.613909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.614206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.614520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.614526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 
00:32:43.656 [2024-04-26 13:15:48.614835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.615115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.615122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.615323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.615695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.615702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.616009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.616320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.616326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.616647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.616868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.616875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.617193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.617530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.617537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.617847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.618144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.618151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.618529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.618854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.618861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 
00:32:43.656 [2024-04-26 13:15:48.619169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.619476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.619483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.619792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.620171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.620178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.620489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.620810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.620816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.656 [2024-04-26 13:15:48.621178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.621479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.656 [2024-04-26 13:15:48.621486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.656 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.621799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.622118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.622125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.622337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.622616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.622623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.622938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.623275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.623282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 
00:32:43.657 [2024-04-26 13:15:48.623618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.623915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.623923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.624244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.624532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.624538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.624842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.625129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.625135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.625445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.625764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.625771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.626085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.626287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.626293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.626629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.626911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.626918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.627206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.627411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.627417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 
00:32:43.657 [2024-04-26 13:15:48.627730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.628023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.628030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.628325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.628623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.628629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.628919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.629105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.629111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.629419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.629670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.629676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.630017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.630314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.630320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.630471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.630844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.630852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.631018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.631314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.631320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 
00:32:43.657 [2024-04-26 13:15:48.631622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.631939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.631946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.632151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.632381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.632387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.632618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.632855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.632861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.633171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.633470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.633476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.633787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.633934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.633941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.634268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.634573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.634579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 00:32:43.657 [2024-04-26 13:15:48.634872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.635187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.635193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.657 qpair failed and we were unable to recover it. 
00:32:43.657 [2024-04-26 13:15:48.635496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.635768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.657 [2024-04-26 13:15:48.635774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.636089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.636404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.636410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.636720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.637002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.637008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.637316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.637508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.637514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.637823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.638160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.638167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.638457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.638617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.638624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.638971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.639245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.639252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 
00:32:43.658 [2024-04-26 13:15:48.639550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.639829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.639846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.640170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.640468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.640474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.640800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.641003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.641010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.641303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.641610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.641616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.641920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.642135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.642142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.642449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.642736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.642742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.643030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.643342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.643349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 
00:32:43.658 [2024-04-26 13:15:48.643655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.643849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.643856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.644206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.644514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.644521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.644848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.645175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.645181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.645458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.645749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.645755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.646044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.646371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.646377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.646530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.646845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.646852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.647161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.647366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.647373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 
00:32:43.658 [2024-04-26 13:15:48.647690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.648028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.648035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.648309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.648626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.648632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.648925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.649220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.649226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.649534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.649827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.649834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.650222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.650550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.650557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.650857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.651171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.651177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.651487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.651767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.651774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 
00:32:43.658 [2024-04-26 13:15:48.652107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.652422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.652428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.658 qpair failed and we were unable to recover it. 00:32:43.658 [2024-04-26 13:15:48.652605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.658 [2024-04-26 13:15:48.652912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.652920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.653231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.653540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.653546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.653854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.654143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.654149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.654441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.654765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.654773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.655091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.655398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.655405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.655728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.656057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.656064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 
00:32:43.659 [2024-04-26 13:15:48.656368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.656667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.656674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.656869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.657189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.657195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.657537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.657855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.657863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.658173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.658489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.658495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.658774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.659115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.659122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.659452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.659590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.659597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.659868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.660176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.660183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 
00:32:43.659 [2024-04-26 13:15:48.660506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.660813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.660819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.661085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.661402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.661409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.661605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.661937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.661944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.662160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.662496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.662502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.662874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.663167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.663174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.663507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.663823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.663829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.664191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.664504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.664511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 
00:32:43.659 [2024-04-26 13:15:48.664848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.665145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.665152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.665492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.665843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.665849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.666180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.666480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.666487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.666779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.667094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.667100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.667410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.667749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.667755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.668108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.668435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.668442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.668776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.669160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.669167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 
00:32:43.659 [2024-04-26 13:15:48.669464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.669781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.669788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.670095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.670420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.659 [2024-04-26 13:15:48.670427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.659 qpair failed and we were unable to recover it. 00:32:43.659 [2024-04-26 13:15:48.670734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.671010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.671017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.671316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.671639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.671646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.671878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.672176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.672182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.672489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.672803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.672809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.673127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.673413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.673419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 
00:32:43.660 [2024-04-26 13:15:48.673808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.674072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.674078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.674416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.674801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.674808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.675122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.675438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.675445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.675752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.676120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.676126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.676416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.676752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.676759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.677071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.677392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.677399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.677710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.677990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.677998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 
00:32:43.660 [2024-04-26 13:15:48.678281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.678473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.678480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.678796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.679025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.679031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.679319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.679619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.679626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.679941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.680272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.680278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.680555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.680865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.680872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.681201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.681491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.681498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.681809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.682110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.682117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 
00:32:43.660 [2024-04-26 13:15:48.682438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.682723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.682729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.683032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.683349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.683356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.683674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.683981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.683989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.684314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.684506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.684513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.684831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.685157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.685164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.685466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.685785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.685792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.686106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.686415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.686422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 
00:32:43.660 [2024-04-26 13:15:48.686723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.687031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.687038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.687351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.687643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.687649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.687965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.688259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.660 [2024-04-26 13:15:48.688266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.660 qpair failed and we were unable to recover it. 00:32:43.660 [2024-04-26 13:15:48.688557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.688891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.688898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.661 [2024-04-26 13:15:48.689213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.689491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.689497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.661 [2024-04-26 13:15:48.689809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.690102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.690110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.661 [2024-04-26 13:15:48.690409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.690715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.690722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 
00:32:43.661 [2024-04-26 13:15:48.691040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.691337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.691344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.661 [2024-04-26 13:15:48.691681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.692005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.692011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.661 [2024-04-26 13:15:48.692323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.692642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.692648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.661 [2024-04-26 13:15:48.692847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.693190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.693197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.661 [2024-04-26 13:15:48.693507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.693801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.661 [2024-04-26 13:15:48.693807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.661 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.694194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.694510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.694517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.694836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.695159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.695166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 
00:32:43.931 [2024-04-26 13:15:48.695452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.695655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.695662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.695887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.696247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.696253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.696593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.696915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.696922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.697282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.697464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.697470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.697841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.698157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.698164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.698331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.698610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.698617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.698918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.699246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.699253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 
00:32:43.931 [2024-04-26 13:15:48.699571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.699896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.699903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.700211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.700486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.700500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.700799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.701096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.701102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.701403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.701716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.701723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.701940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.702264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.702271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.931 [2024-04-26 13:15:48.702556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.702770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.931 [2024-04-26 13:15:48.702777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.931 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.703087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.703408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.703415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 
00:32:43.932 [2024-04-26 13:15:48.703725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.704038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.704045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.704348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.704646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.704653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.704862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.705129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.705135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.705330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.705658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.705665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.705884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.706229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.706236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.706547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.706875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.706882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.707232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.707551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.707558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 
00:32:43.932 [2024-04-26 13:15:48.707871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.708200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.708207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.708511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.708853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.708861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.709017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.709314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.709320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.709384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.709678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.709684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.709892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.710205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.710212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.710414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.710732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.710739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.711051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.711240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.711248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 
00:32:43.932 [2024-04-26 13:15:48.711564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.711754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.711762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.712056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.712401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.712409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.712472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.712761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.712768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.712959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.713266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.713274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.713586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.713896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.713902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.714218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.714548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.714554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.714858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.715049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.715055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 
00:32:43.932 [2024-04-26 13:15:48.715347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.715540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.715547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.715869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.716188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.716195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.716580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.716752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.716759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.717065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.717387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.717394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.717729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.718038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.718045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.718342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.718692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.718698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.932 qpair failed and we were unable to recover it. 00:32:43.932 [2024-04-26 13:15:48.719093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.719293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-04-26 13:15:48.719300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 
00:32:43.933 [2024-04-26 13:15:48.719622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.719964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.719971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.720262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.720593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.720601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.720916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.721300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.721306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.721602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.721846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.721853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.722160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.722450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.722456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.722670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.722957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.722963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.723279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.723486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.723493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 
00:32:43.933 [2024-04-26 13:15:48.723811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.724149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.724156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.724310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.724653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.724659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.724961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.725298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.725305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.725613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.725893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.725899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.726293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.726590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.726597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.726690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.726961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.726968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.727298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.727621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.727628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 
00:32:43.933 [2024-04-26 13:15:48.727810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.728094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.728101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.728251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.728626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.728634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.728957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.729259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.729266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.729593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.729869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.729876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.730056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.730369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.730376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.730580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.730893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.730901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.731238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.731547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.731553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 
00:32:43.933 [2024-04-26 13:15:48.731776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.732115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.732122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.732308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.732618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.732625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.732833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.733171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.733185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.733524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.733820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.733826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.734147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.734313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.734320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.734593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.734903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.734917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 00:32:43.933 [2024-04-26 13:15:48.735245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.735543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.735550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.933 qpair failed and we were unable to recover it. 
00:32:43.933 [2024-04-26 13:15:48.735891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.933 [2024-04-26 13:15:48.736228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.736235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.736514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.736816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.736823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.737014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.737323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.737329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.737626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.737911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.737917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.738120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.738361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.738368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.738628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.738912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.738920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.739246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.739551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.739558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 
00:32:43.934 [2024-04-26 13:15:48.739856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.740218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.740224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.740526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.740823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.740829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.741129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.741313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.741319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.741613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.741797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.741803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.742026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.742354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.742360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.742560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.742925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.742932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.743225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.743553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.743561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 
00:32:43.934 [2024-04-26 13:15:48.743868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.744182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.744188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.744481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.744820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.744826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.745130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.745438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.745444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.745746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.746038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.746044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.746357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.746671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.746677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.746960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.747264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.747270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.747568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.747865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.747871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 
00:32:43.934 [2024-04-26 13:15:48.748081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.748410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.748416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.748716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.749016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.749023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.749416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.749722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.749729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.750063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.750399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.750406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.750719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.750896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.750903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.751259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.751556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.751562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.751856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.752154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.752162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 
00:32:43.934 [2024-04-26 13:15:48.752480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.752628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.934 [2024-04-26 13:15:48.752635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.934 qpair failed and we were unable to recover it. 00:32:43.934 [2024-04-26 13:15:48.752905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.753295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.753301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.753591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.753870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.753876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.754104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.754431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.754437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.754741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.754928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.754935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.755164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.755532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.755538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.755840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.756065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.756071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 
00:32:43.935 [2024-04-26 13:15:48.756422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.756647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.756654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.757021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.757248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.757254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.757515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.757865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.757871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.758173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.758346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.758354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.758665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.758991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.758997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.759309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.759636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.759642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.759970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.760295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.760301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 
00:32:43.935 [2024-04-26 13:15:48.760612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.760914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.760921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.761236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.761523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.761529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.761820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.761938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.761945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.762266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.762569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.762576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.762759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.763034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.763041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.763344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.763656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.763663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.763982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.764294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.764301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 
00:32:43.935 [2024-04-26 13:15:48.764613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.764930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.764937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.765245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.765555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.765561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.765855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.766208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.766214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.766508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.766691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.766699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.766908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.767238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.767245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.767560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.767864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.767871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.935 qpair failed and we were unable to recover it. 00:32:43.935 [2024-04-26 13:15:48.768157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.935 [2024-04-26 13:15:48.768493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.768499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 
00:32:43.936 [2024-04-26 13:15:48.768787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.769126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.769132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.769439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.769744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.769751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.770041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.770335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.770341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.770640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.770933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.770939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.771134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.771463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.771469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.771768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.772069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.772075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.772387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.772706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.772715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 
00:32:43.936 [2024-04-26 13:15:48.773012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.773359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.773367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.773674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.773985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.773992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.774309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.774570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.774576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.774890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.775205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.775213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.775506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.775897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.775903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.776195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.776478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.776485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.776777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.777087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.777093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 
00:32:43.936 [2024-04-26 13:15:48.777282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.777612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.777618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.778003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.778326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.778334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.778624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.778929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.778938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.779256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.779573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.779580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.779878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.780102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.780108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.780406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.780732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.780739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.781021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.781329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.781336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 
00:32:43.936 [2024-04-26 13:15:48.781627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.781948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.781955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.782257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.782539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.782545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.782738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.783083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.783090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.783382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.783562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.936 [2024-04-26 13:15:48.783568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.936 qpair failed and we were unable to recover it. 00:32:43.936 [2024-04-26 13:15:48.783796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.784133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.784140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.784235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.784507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.784516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.784816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.785128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.785135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 
00:32:43.937 [2024-04-26 13:15:48.785427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.785739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.785745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.786031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.786351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.786358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.786553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.786903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.786910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.787256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.787490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.787496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.787789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.788094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.788100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.788413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.788693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.788699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.788854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.789130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.789136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 
00:32:43.937 [2024-04-26 13:15:48.789430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.789730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.789736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.790018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.790099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.790105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.790386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.790693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.790700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.790909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.791177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.791183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.791478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.791809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.791816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.792008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.792235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.792242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.792552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.792869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.792875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 
00:32:43.937 [2024-04-26 13:15:48.793194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.793352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.793359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.793643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.793958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.793965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.794289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.794647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.794653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.794994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.795159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.795165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.795500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.795793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.795800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.796140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.796456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.796463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.796755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.797076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.797083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 
00:32:43.937 [2024-04-26 13:15:48.797398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.797702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.797708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.798103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.798447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.798453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.798768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.799072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.799079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.799380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.799648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.799655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.937 qpair failed and we were unable to recover it. 00:32:43.937 [2024-04-26 13:15:48.799967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.937 [2024-04-26 13:15:48.800294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.800301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.800610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.800916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.800923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.801245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.801598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.801604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 
00:32:43.938 [2024-04-26 13:15:48.801913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.802209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.802215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.802408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.802766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.802773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.803053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.803375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.803381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.803672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.804004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.804011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.804295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.804615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.804622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.804925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.805262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.805269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.805455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.805788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.805795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 
00:32:43.938 [2024-04-26 13:15:48.806081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.806276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.806283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.806591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.806935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.806943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.807113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.807283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.807291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.807642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.807956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.807964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.808171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.808498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.808505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.808695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.808995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.809003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.809325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.809528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.809535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 
00:32:43.938 [2024-04-26 13:15:48.809839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.810139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.810147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.810337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.810683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.810690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.811017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.811339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.811346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.811648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.812002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.812010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.812087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.812362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.812370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.812589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.812935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.812943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.813451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.813799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.813807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 
00:32:43.938 [2024-04-26 13:15:48.814132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.814433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.814440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.814766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.814952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.814960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.815261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.815560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.815567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.815870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.816223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.816231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.816605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.816898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.816906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.938 qpair failed and we were unable to recover it. 00:32:43.938 [2024-04-26 13:15:48.817226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.938 [2024-04-26 13:15:48.817545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.817552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.817743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.817914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.817922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 
00:32:43.939 [2024-04-26 13:15:48.818367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.818672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.818680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.818991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.819316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.819323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.819514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.819814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.819822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.820184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.820483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.820491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.820825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.821152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.821160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.821467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.821782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.821789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.821981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.822328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.822335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 
00:32:43.939 [2024-04-26 13:15:48.822653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.822820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.822828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.823154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.823470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.823476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.823803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.824127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.824134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.824446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.824753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.824760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.825101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.825428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.825434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.825732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.825975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.825981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.826307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.826580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.826586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 
00:32:43.939 [2024-04-26 13:15:48.826873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.827049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.827057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.827373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.827693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.827699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.828042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.828338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.828345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.828628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.828915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.828922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.829225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.829551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.829558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.829868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.830235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.830241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.830618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.830951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.830958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 
00:32:43.939 [2024-04-26 13:15:48.831325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.831672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.831679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.831857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.832247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.832253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.832543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.832750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.832758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.833128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.833424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.833430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.833740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.834063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.834069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.834244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.834484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.939 [2024-04-26 13:15:48.834491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.939 qpair failed and we were unable to recover it. 00:32:43.939 [2024-04-26 13:15:48.834649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.834847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.834854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 
00:32:43.940 [2024-04-26 13:15:48.835082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.835363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.835370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.835667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.835992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.835999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.836323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.836622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.836628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.836953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.837259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.837265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.837586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.837889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.837896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.838177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.838513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.838520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.838640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.838708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.838715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 
00:32:43.940 [2024-04-26 13:15:48.838996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.839301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.839308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.839597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.839911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.839917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.840244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.840559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.840565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.840860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.841195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.841202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.841513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.841847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.841854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.842117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.842429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.842435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 00:32:43.940 [2024-04-26 13:15:48.842728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.843016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.843022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.940 qpair failed and we were unable to recover it. 
00:32:43.940 [2024-04-26 13:15:48.843353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.843696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.940 [2024-04-26 13:15:48.843703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.844012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.844242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.844249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.844418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.844722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.844729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.844909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.845192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.845198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.845529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.845849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.845856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.846146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.846461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.846467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.846766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.847081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.847087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 
00:32:43.941 [2024-04-26 13:15:48.847396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.847590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.847596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.847808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.848022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-04-26 13:15:48.848029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.941 qpair failed and we were unable to recover it. 00:32:43.941 [2024-04-26 13:15:48.848351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.848661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.848668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.848977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.849311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.849317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.849635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.849913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.849920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.850244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.850529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.850535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.850840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.851010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.851016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 
00:32:43.942 [2024-04-26 13:15:48.851323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.851636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.851642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.851957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.852244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.852250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.852548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.852858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.852865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.853268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.853582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.853589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.853881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.854188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.854195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.854499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.854814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.854820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.855126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.855450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.855456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 
00:32:43.942 [2024-04-26 13:15:48.855742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.856064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.856071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.856369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.856710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.856718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.857048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.857365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.857371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.857685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.857997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.858003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.858185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.858534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.858540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.858711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.859019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.859025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.859394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.859687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.859693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 
00:32:43.942 [2024-04-26 13:15:48.859845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.860114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.860121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.860433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.860749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.860755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.860998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.861205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.861219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.861568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.861761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.861769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.862089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.862391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.862397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.862723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.863013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.863020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.863312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.863621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.863627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 
00:32:43.942 [2024-04-26 13:15:48.863935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.864244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.864250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.864552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.864867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-04-26 13:15:48.864875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.942 qpair failed and we were unable to recover it. 00:32:43.942 [2024-04-26 13:15:48.864950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.865243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.865250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.865562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.865850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.865857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.866168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.866482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.866489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.866860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.867172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.867179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.867499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.867736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.867744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 
00:32:43.943 [2024-04-26 13:15:48.868033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.868365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.868371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.868571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.868910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.868917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.869235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.869524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.869531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.869845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.870046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.870053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.870346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.870667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.870673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.870987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.871270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.871276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.871442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.871701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.871707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 
00:32:43.943 [2024-04-26 13:15:48.871998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.872301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.872308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.872610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.872776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.872782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.873066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.873372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.873380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.873756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.874039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.874046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.874363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.874667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.874673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.874987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.875164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.875172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.875397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.875705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.875712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 
00:32:43.943 [2024-04-26 13:15:48.876014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.876185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.876192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.876460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.876788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.876794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.877106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.877412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.877418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.877718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.877901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.877907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.878182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.878514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.878521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.878835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.879169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.879177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.879459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.879746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.879753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 
00:32:43.943 [2024-04-26 13:15:48.879916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.880189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.880195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.880527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.880833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.880842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.943 qpair failed and we were unable to recover it. 00:32:43.943 [2024-04-26 13:15:48.881020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-04-26 13:15:48.881305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.881311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.881643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.881948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.881955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.882137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.882550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.882556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.882847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.883128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.883135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.883324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.883659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.883665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 
00:32:43.944 [2024-04-26 13:15:48.883964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.884259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.884265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.884398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.884677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.884684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.885027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.885326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.885333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.885646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.885946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.885953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.886227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.886561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.886567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.886941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.887288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.887295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.887638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.887959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.887966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 
00:32:43.944 [2024-04-26 13:15:48.888293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.888597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.888603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.888887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.889199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.889206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.889506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.889822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.889829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.890123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.890411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.890418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.890728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.891054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.891060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.891360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.891688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.891694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.891872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.892168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.892175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 
00:32:43.944 [2024-04-26 13:15:48.892485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.892721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.892728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.893058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.893402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.893409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.893748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.894054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.894061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.894354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.894638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.894644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.894946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.895238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.895244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.895543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.895835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.895843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.896135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.896433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.896439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 
00:32:43.944 [2024-04-26 13:15:48.896747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.896941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.896947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.897252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.897576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.897582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.897882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.898179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.898185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.944 qpair failed and we were unable to recover it. 00:32:43.944 [2024-04-26 13:15:48.898472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.944 [2024-04-26 13:15:48.898695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.898701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.898993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.899326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.899332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.899628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.899820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.899826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.900137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.900425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.900432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 
00:32:43.945 [2024-04-26 13:15:48.900642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.900964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.900970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.901329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.901674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.901681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.902080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.902372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.902378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.902727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.903009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.903015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.903323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.903601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.903607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.903908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.904213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.904219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.904533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.904843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.904849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 
00:32:43.945 [2024-04-26 13:15:48.905146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.905432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.905438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.905617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.905842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.905850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.906184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.906465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.906472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.906792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.907099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.907107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.907467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.907773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.907779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.907950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.908149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.908156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.908462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.908637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.908644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 
00:32:43.945 [2024-04-26 13:15:48.908976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.909306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.909312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.909622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.909941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.909949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.910268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.910610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.910616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.910909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.911190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.911197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.911507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.911823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.911829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.912002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.912293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.912300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.912591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.912889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.912896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 
00:32:43.945 [2024-04-26 13:15:48.913330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.913666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.913673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.913985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.914277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.914284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.914602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.914895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.914902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.945 [2024-04-26 13:15:48.915230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.915590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.945 [2024-04-26 13:15:48.915596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.945 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.915887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.916196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.916202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.916493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.916780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.916786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.917086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.917439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.917446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 
00:32:43.946 [2024-04-26 13:15:48.917747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.918062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.918068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.918384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.918705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.918711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.919003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.919349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.919355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.919691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.919973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.919979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.920199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.920420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.920427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.920740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.921060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.921066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.921375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.921723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.921731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 
00:32:43.946 [2024-04-26 13:15:48.921924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.922207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.922214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.922562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.922866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.922874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.923187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.923381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.923387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.923718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.924011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.924018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.924347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.924666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.924672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.924834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.925264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.925270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.925589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.925924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.925931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 
00:32:43.946 [2024-04-26 13:15:48.926247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.926585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.926591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.926888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.927217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.927224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.927518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.927825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.927831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.927994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.928300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.928307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.928613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.928930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.928936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.929250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.929582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.929589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 00:32:43.946 [2024-04-26 13:15:48.929887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.930199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.946 [2024-04-26 13:15:48.930205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.946 qpair failed and we were unable to recover it. 
00:32:43.946 [2024-04-26 13:15:48.930528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.930867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.930873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.931196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.931503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.931509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.931829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.932148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.932155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.932447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.932748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.932754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.933050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.933426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.933433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.933653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.933942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.933949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.934244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.934440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.934446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 
00:32:43.947 [2024-04-26 13:15:48.934755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.935042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.935049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.935365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.935652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.935658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.935971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.936284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.936290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.936601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.936909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.936916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.937235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.937541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.937548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.937855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.938142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.938148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.938460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.938758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.938765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 
00:32:43.947 [2024-04-26 13:15:48.939088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.939374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.939380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.939671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.939984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.939990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.940302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.940451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.940458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.940755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.940955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.940961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.941294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.941591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.941597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.941792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.942115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.942122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.942444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.942769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.942776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 
00:32:43.947 [2024-04-26 13:15:48.943080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.943414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.943420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.943705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.943990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.943997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.944307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.944592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.944598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.945025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.945247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.945253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.945562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.945879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.945885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.946181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.946545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.946551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 00:32:43.947 [2024-04-26 13:15:48.946849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.947133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.947140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.947 qpair failed and we were unable to recover it. 
00:32:43.947 [2024-04-26 13:15:48.947440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.947 [2024-04-26 13:15:48.947638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.947644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.947958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.948244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.948250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.948579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.948886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.948892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.949087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.949425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.949431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.949740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.950057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.950063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.950361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.950647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.950653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.950966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.951262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.951268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 
00:32:43.948 [2024-04-26 13:15:48.951572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.951901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.951909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.952222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.952534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.952540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.952855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.953159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.953165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.953467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.953756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.953762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.954121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.954446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.954452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.954758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.955075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.955082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.955392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.955670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.955677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 
00:32:43.948 [2024-04-26 13:15:48.956002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.956300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.956306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.956486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.956877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.956883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.957229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.957385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.957391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.957660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.958017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.958025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.958320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.958642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.958649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.958940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.959147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.959154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.959349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.959686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.959692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 
00:32:43.948 [2024-04-26 13:15:48.959990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.960326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.960332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.960632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.960921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.960928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.961237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.961531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.961537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.961830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.962151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.962158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.962457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.962743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.962749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.963131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.963456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.963463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.948 [2024-04-26 13:15:48.963798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.964072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.964081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 
00:32:43.948 [2024-04-26 13:15:48.964405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.964725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.948 [2024-04-26 13:15:48.964732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.948 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.965029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.965361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.965367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.965668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.965960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.965967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.966270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.966561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.966567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.966868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.967159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.967165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.967475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.967761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.967767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.968080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.968385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.968391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 
00:32:43.949 [2024-04-26 13:15:48.968691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.968857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.968865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.969165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.969467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.969474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.969805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.969973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.969981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.970282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.970607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.970613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.970801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.971127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.971134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.971441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.971725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.971731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.972047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.972367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.972374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 
00:32:43.949 [2024-04-26 13:15:48.972653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.972872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.972884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.973210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.973516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.973522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.973830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.974190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.974196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.974483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.974766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.974772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.975085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.975368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.975374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.975678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.975997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.976004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.976306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.976533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.976539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 
00:32:43.949 [2024-04-26 13:15:48.976735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.977043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.977050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.977345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.977688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.977694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.977991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.978313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.978319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.978630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.978908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.978915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.979236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.979577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.979583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.979883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.980264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.980271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.980580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.980904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.980910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 
00:32:43.949 [2024-04-26 13:15:48.981219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.981373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.949 [2024-04-26 13:15:48.981380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:43.949 qpair failed and we were unable to recover it. 00:32:43.949 [2024-04-26 13:15:48.981694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.982000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.982009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.982225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.982518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.982525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.982813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.983091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.983098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.983441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.983784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.983792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.984109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.984431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.984438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.984747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.985062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.985068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 
00:32:44.226 [2024-04-26 13:15:48.985408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.985570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.985577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.985851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.986003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.986010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.986343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.986531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.986537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.986849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.987184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.987191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.987495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.987811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.987818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.988127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.988454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.988461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.988774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.989090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.989097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 
00:32:44.226 [2024-04-26 13:15:48.989411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.989737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.989744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.990065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.990385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.990393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.990685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.990964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.990972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.991286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.991600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.991607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.991887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.992091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.992097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.992406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.992723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.992729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.992924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.993265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.993272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 
00:32:44.226 [2024-04-26 13:15:48.993569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.993861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.226 [2024-04-26 13:15:48.993869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.226 qpair failed and we were unable to recover it. 00:32:44.226 [2024-04-26 13:15:48.994068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.994356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.994363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.994658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.995036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.995043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.995365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.995683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.995689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.996000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.996306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.996313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.996615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.996954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.996962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.997245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.997575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.997582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 
00:32:44.227 [2024-04-26 13:15:48.997858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.998141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.998147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.998447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.998753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.998759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.999155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.999421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:48.999428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:48.999733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.000022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.000029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.000347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.000661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.000668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.000946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.001275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.001281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.001598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.001871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.001878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 
00:32:44.227 [2024-04-26 13:15:49.002173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.002514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.002520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.002808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.003124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.003130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.003431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.003746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.003752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.004041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.004352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.004359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.004731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.004984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.004990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.005285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.005671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.005678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.005963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.006266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.006272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 
00:32:44.227 [2024-04-26 13:15:49.006586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.006894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.006901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.007204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.007533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.007539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.227 qpair failed and we were unable to recover it. 00:32:44.227 [2024-04-26 13:15:49.007853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.227 [2024-04-26 13:15:49.008056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.008063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.008392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.008702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.008708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.009029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.009367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.009373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.009734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.010037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.010044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.010366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.010686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.010692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 
00:32:44.228 [2024-04-26 13:15:49.010847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.011009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.011015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.011307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.011658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.011665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.011962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.012284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.012291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.012609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.012926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.012932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.013101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.013370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.013377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.013708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.014018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.014025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.014332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.014650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.014656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 
00:32:44.228 [2024-04-26 13:15:49.014841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.015119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.015125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.015429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.015637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.015644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.015962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.016132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.016139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.016434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.016732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.016738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.017088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.017443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.017449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.017735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.018023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.018030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.018338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.018631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.018637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 
00:32:44.228 [2024-04-26 13:15:49.018921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.019246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.019253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.019585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.019853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.019861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.228 [2024-04-26 13:15:49.020229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.020520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.228 [2024-04-26 13:15:49.020527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.228 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.020833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.021128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.021135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.021424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.021718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.021724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.022022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.022357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.022363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.022656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.022973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.022980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 
00:32:44.229 [2024-04-26 13:15:49.023291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.023585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.023592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.023783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.024109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.024116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.024437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.024635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.024642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.024952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.025253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.025259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.025610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.025803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.025809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.026102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.026400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.026407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.026699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.026893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.026899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 
00:32:44.229 [2024-04-26 13:15:49.027190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.027509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.027516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.027817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.028101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.028108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.028443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.028794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.028801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.029109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.029427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.029433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.029623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.029959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.029966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.030188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.030521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.030527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.030821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.031129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.031135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 
00:32:44.229 [2024-04-26 13:15:49.031435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.031717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.031723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.032018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.032325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.032332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.032525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.032884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.032891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.033191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.033360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.033367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.033684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.033963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.033970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.034141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.034313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.034319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 00:32:44.229 [2024-04-26 13:15:49.034608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.034928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.229 [2024-04-26 13:15:49.034935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.229 qpair failed and we were unable to recover it. 
00:32:44.230 [2024-04-26 13:15:49.035278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.035576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.035582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.035906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.036237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.036243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.036553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.036835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.036845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.037130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.037341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.037348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.037645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.037959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.037966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.038272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.038587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.038593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.038748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.039101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.039108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 
00:32:44.230 [2024-04-26 13:15:49.039418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.039584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.039591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.039786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.040074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.040081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.040414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.040744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.040751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.041072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.041395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.041401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.041698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.041984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.041991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.042281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.042579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.042585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.042893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.043091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.043098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 
00:32:44.230 [2024-04-26 13:15:49.043428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.043734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.043741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.044054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.044344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.044350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.044643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.044960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.044967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.045347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.045516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.045523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.045744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.046042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.046049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.046359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.046630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.046636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.046911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.047209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.047215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 
00:32:44.230 [2024-04-26 13:15:49.047489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.047810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.047818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.048132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.048427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.048433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.048748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.049037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.049044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.230 [2024-04-26 13:15:49.049341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.049617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.230 [2024-04-26 13:15:49.049623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.230 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.049928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.050154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.050160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.050473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.050835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.050845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.051144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.051424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.051430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 
00:32:44.231 [2024-04-26 13:15:49.051722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.052024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.052031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.052350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.052519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.052526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.052824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.053126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.053133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.053402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.053613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.053623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.053820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.054179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.054185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.054508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.054793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.054799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.055106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.055299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.055305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 
00:32:44.231 [2024-04-26 13:15:49.055613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.055938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.055945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.056259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.056557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.056563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.056840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.057167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.057173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.057489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.057677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.057683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.058015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.058362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.058368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.058644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.058970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.058977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 00:32:44.231 [2024-04-26 13:15:49.059314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.059476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.059484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.231 qpair failed and we were unable to recover it. 
00:32:44.231 [2024-04-26 13:15:49.059661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.231 [2024-04-26 13:15:49.059973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.059980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.060172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.060355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.060363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.060651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.060979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.060987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.061306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.061528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.061534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.061742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.062074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.062081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.062377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.062693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.062700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.063043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.063362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.063368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 
00:32:44.232 [2024-04-26 13:15:49.063681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.063997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.064004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.064341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.064661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.064669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.064939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.065111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.065119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.065404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.065810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.065816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.066197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.066376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.066383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.066658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.066997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.067005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.067318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.067639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.067645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 
00:32:44.232 [2024-04-26 13:15:49.067960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.068294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.068301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.068701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.069018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.069024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.069349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.069508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.069515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.069794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.070102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.070108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.070267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.070546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.070552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.070872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.071083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.071090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.071394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.071591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.071597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 
00:32:44.232 [2024-04-26 13:15:49.071923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.072258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.072265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.072578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.072875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.072882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.073084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.073357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.073363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.232 [2024-04-26 13:15:49.073653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.073691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.232 [2024-04-26 13:15:49.073698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.232 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.073988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.074315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.074321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.074616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.074973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.074980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.075294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.075575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.075581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 
00:32:44.233 [2024-04-26 13:15:49.075874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.076200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.076207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.076522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.076757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.076763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.077149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.077468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.077474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.077740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.078008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.078015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.078213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.078526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.078532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.078833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.078990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.079003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.079326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.079521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.079527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 
00:32:44.233 [2024-04-26 13:15:49.079854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.080143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.080149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.080458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.080652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.080659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.081004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.081307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.081313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.081620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.081965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.081972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.082285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.082627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.082633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.082945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.083252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.083259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.083468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.083736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.083743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 
00:32:44.233 [2024-04-26 13:15:49.084030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.084326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.084332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.084718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.085002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.085009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.085336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.085654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.085660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.085982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.086324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.086330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.086633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.086960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.086966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.087275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.087564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.087570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.087787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.087870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.087877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 
00:32:44.233 [2024-04-26 13:15:49.088054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.088392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.088398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.233 qpair failed and we were unable to recover it. 00:32:44.233 [2024-04-26 13:15:49.088699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.233 [2024-04-26 13:15:49.089033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.089040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.089339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.089654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.089660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.089941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.090116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.090124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.090457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.090763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.090769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.091061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.091229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.091237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.091553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.091852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.091859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 
00:32:44.234 [2024-04-26 13:15:49.092204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.092493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.092499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.092806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.093148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.093155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.093465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.093757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.093763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.094082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.094394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.094400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.094712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.094901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.094907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.095220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.095530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.095536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.095840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.096160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.096166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 
00:32:44.234 [2024-04-26 13:15:49.096346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.096643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.096649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.096951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.097257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.097263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.097564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.097909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.097915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.098254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.098477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.098483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.098797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.099096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.099102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.099419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.099731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.099737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.100029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.100330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.100336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 
00:32:44.234 [2024-04-26 13:15:49.100639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.100955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.100962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.101292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.101481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.101488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.101813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.102131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.102138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.102431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.102754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.102761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.234 [2024-04-26 13:15:49.102972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.103308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.234 [2024-04-26 13:15:49.103315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.234 qpair failed and we were unable to recover it. 00:32:44.235 [2024-04-26 13:15:49.103641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.235 [2024-04-26 13:15:49.103828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.235 [2024-04-26 13:15:49.103835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.235 qpair failed and we were unable to recover it. 00:32:44.235 [2024-04-26 13:15:49.104117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.235 [2024-04-26 13:15:49.104323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.235 [2024-04-26 13:15:49.104329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.235 qpair failed and we were unable to recover it. 
00:32:44.235 [2024-04-26 13:15:49.104658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:44.235 [2024-04-26 13:15:49.104943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:44.235 [2024-04-26 13:15:49.104951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 
00:32:44.235 qpair failed and we were unable to recover it. 
[... the same three-line failure pattern repeats continuously from 13:15:49.104658 through 13:15:49.195591 (console timestamps 00:32:44.235-00:32:44.241): every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f3198000b90, and each qpair fails without recovering ...]
00:32:44.241 [2024-04-26 13:15:49.195877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.241 [2024-04-26 13:15:49.196191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.241 [2024-04-26 13:15:49.196197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.241 qpair failed and we were unable to recover it. 00:32:44.241 [2024-04-26 13:15:49.196495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.241 [2024-04-26 13:15:49.196846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.241 [2024-04-26 13:15:49.196854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.241 qpair failed and we were unable to recover it. 00:32:44.241 [2024-04-26 13:15:49.197141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.241 [2024-04-26 13:15:49.197370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.241 [2024-04-26 13:15:49.197376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.241 qpair failed and we were unable to recover it. 00:32:44.241 [2024-04-26 13:15:49.197677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.197958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.197965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.198274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.198554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.198561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.198874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.199194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.199200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.199508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.199850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.199858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 
00:32:44.242 [2024-04-26 13:15:49.200160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.200476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.200482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.200789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.201116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.201123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.201271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.201584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.201590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.201887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.202106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.202112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.202420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.202719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.202725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.203022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.203222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.203228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.203370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.203742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.203748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 
00:32:44.242 [2024-04-26 13:15:49.204050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.204346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.204352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.204645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.204830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.204839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.205148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.205471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.205478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.205793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.206119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.206125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.206432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.206772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.206778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.207073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.207367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.207374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.242 [2024-04-26 13:15:49.207756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.208152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.208159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 
00:32:44.242 [2024-04-26 13:15:49.208472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.208789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.242 [2024-04-26 13:15:49.208796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.242 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.209107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.209283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.209290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.209596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.209936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.209943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.210271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.210606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.210613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.210904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.211208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.211214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.211511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.211814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.211820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.212198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.212522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.212529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 
00:32:44.243 [2024-04-26 13:15:49.212825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.213118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.213125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.213438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.213755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.213762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.214077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.214390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.214397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.214690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.214999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.215005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.215311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.215594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.215600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.215910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.216093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.216100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.216409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.216718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.216724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 
00:32:44.243 [2024-04-26 13:15:49.217018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.217340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.217346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.217654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.217998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.218004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.218324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.218514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.218521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.218671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.218960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.218967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.219181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.219481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.219488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.219754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.220078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.220085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.220361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.220686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.220693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 
00:32:44.243 [2024-04-26 13:15:49.221006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.221324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.221330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.221631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.221911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.243 [2024-04-26 13:15:49.221918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.243 qpair failed and we were unable to recover it. 00:32:44.243 [2024-04-26 13:15:49.222226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.222520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.222526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.222710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.222931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.222938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.223262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.223585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.223591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.223880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.224102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.224109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.224434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.224835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.224844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 
00:32:44.244 [2024-04-26 13:15:49.225147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.225460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.225466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.225805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.225993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.226000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.226349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.226688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.226694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.227010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.227317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.227323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.227468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.227736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.227743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.228072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.228382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.228388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.228729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.229031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.229038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 
00:32:44.244 [2024-04-26 13:15:49.229338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.229659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.229666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.229859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.230219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.230225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.230525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.230812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.230818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.231112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.231447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.231453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.231774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.232080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.232088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.232395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.232737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.232743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.233066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.233204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.233210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 
00:32:44.244 [2024-04-26 13:15:49.233432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.233767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.233774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.233991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.234341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.234348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.234658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.234983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.234990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.235186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.235503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.235509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.244 qpair failed and we were unable to recover it. 00:32:44.244 [2024-04-26 13:15:49.235815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.244 [2024-04-26 13:15:49.236146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.236153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.236452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.236656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.236662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.236967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.237172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.237179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 
00:32:44.245 [2024-04-26 13:15:49.237509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.237830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.237841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.238131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.238444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.238450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.238762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.239052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.239059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.239399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.239715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.239721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.240110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.240436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.240443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.240779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.240987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.240993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.241366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.241514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.241521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 
00:32:44.245 [2024-04-26 13:15:49.241846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.242144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.242150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.242475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.242797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.242804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.243105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.243317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.243323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.243645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.243890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.243898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.244194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.244361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.244369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.244668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.244923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.244930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.245218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.245537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.245543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 
00:32:44.245 [2024-04-26 13:15:49.245738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.246072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.246079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.246382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.246690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.246696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.247006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.247195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.247202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.247502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.247813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.247820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.248151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.248429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.248436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.248733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.249089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.249096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 00:32:44.245 [2024-04-26 13:15:49.249412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.249607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.249615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.245 qpair failed and we were unable to recover it. 
00:32:44.245 [2024-04-26 13:15:49.249931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.245 [2024-04-26 13:15:49.250219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.250226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.250452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.250657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.250664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.250994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.251299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.251305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.251615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.251958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.251964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.252063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.252312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.252318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.252642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.252946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.252953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.253254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.253601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.253607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 
00:32:44.246 [2024-04-26 13:15:49.253900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.254075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.254082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.254377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.254455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.254461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.254813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.255133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.255141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.255450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.255634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.255640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.256006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.256286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.256292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.256585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.256747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.256753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.257073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.257238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.257244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 
00:32:44.246 [2024-04-26 13:15:49.257556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.257761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.257767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.258097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.258415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.258421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.258710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.259007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.259014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.259323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.259605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.259611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.259997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.260270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.260276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.260606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.260925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.260932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.261266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.261604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.261610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 
00:32:44.246 [2024-04-26 13:15:49.261924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.262234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.262240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.262385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.262665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.262671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.262997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.263319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.263326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.263573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.263911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.263917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.264229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.264551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.264558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.246 [2024-04-26 13:15:49.264742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.265054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.246 [2024-04-26 13:15:49.265061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.246 qpair failed and we were unable to recover it. 00:32:44.247 [2024-04-26 13:15:49.265395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.265715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.265722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 
00:32:44.247 [2024-04-26 13:15:49.266025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.266340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.266346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 00:32:44.247 [2024-04-26 13:15:49.266670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.267005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.267011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 00:32:44.247 [2024-04-26 13:15:49.267320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.267623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.267629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 00:32:44.247 [2024-04-26 13:15:49.267961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.268272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.268278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 00:32:44.247 [2024-04-26 13:15:49.268466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.268796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.268802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 00:32:44.247 [2024-04-26 13:15:49.269088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.269383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.269389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 00:32:44.247 [2024-04-26 13:15:49.269595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.269920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.269926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 
00:32:44.247 [2024-04-26 13:15:49.270231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.270551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.247 [2024-04-26 13:15:49.270557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.247 qpair failed and we were unable to recover it. 00:32:44.518 [2024-04-26 13:15:49.270891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.518 [2024-04-26 13:15:49.271186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.518 [2024-04-26 13:15:49.271193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.518 qpair failed and we were unable to recover it. 00:32:44.518 [2024-04-26 13:15:49.271389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.518 [2024-04-26 13:15:49.271585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.518 [2024-04-26 13:15:49.271592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.518 qpair failed and we were unable to recover it. 00:32:44.518 [2024-04-26 13:15:49.271802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.518 [2024-04-26 13:15:49.272088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.518 [2024-04-26 13:15:49.272095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.518 qpair failed and we were unable to recover it. 00:32:44.518 [2024-04-26 13:15:49.272388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.272715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.272723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.273039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.273356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.273363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.273676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.273989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.273995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 
00:32:44.519 [2024-04-26 13:15:49.274307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.274590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.274597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.274908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.275220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.275226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.275368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.275529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.275535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.275748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.276082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.276089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.276408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.276725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.276732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.277009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.277348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.277355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.277645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.277958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.277966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 
00:32:44.519 [2024-04-26 13:15:49.278274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.278426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.278434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.278780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.279090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.279098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.279406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.279763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.279770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.280046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.280380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.280386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.280585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.280870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.280877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.281209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.281536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.281542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.281826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.282119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.282126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 
00:32:44.519 [2024-04-26 13:15:49.282342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.282670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.282676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.282981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.283260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.283266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.283580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.283889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.283896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.284206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.284490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.284496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.284879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.285066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.285073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.285451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.285769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.285776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.286009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.286356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.286363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 
00:32:44.519 [2024-04-26 13:15:49.286441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.286754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.286762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.287067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.287371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.287378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.519 qpair failed and we were unable to recover it. 00:32:44.519 [2024-04-26 13:15:49.287689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.288005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.519 [2024-04-26 13:15:49.288012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.288300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.288620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.288626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.288831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.289140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.289146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.289445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.289640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.289647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.289949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.290270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.290276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 
00:32:44.520 [2024-04-26 13:15:49.290493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.290824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.290830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.291202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.291509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.291516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.291819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.292013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.292020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.292382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.292667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.292674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.292887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.293232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.293238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.293547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.293866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.293873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.294165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.294490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.294496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 
00:32:44.520 [2024-04-26 13:15:49.294823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.295108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.295115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.295432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.295712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.295718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.296026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.296363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.296369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.296664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.296957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.296963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.297128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.297368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.297375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.297675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.297975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.297982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.298301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.298610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.298617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 
00:32:44.520 [2024-04-26 13:15:49.298816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.299144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.299151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.299416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.299738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.299745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.300066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.300371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.300378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.300685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.300992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.300999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.301292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.301569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.301575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.301888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.302189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.302195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 00:32:44.520 [2024-04-26 13:15:49.302471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.302686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.520 [2024-04-26 13:15:49.302693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.520 qpair failed and we were unable to recover it. 
00:32:44.521 [2024-04-26 13:15:49.302999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.303286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.303292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.303598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.303814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.303820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.304142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.304458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.304465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.304774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.304982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.304989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.305195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.305522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.305529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.305824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.306111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.306118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.306433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.306728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.306734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 
00:32:44.521 [2024-04-26 13:15:49.307064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.307250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.307256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.307647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.307832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.307844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.308226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.308532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.308539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.308866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.309077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.309084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.309433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.309621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.309627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.309986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.310284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.310291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.310599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.310815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.310821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 
00:32:44.521 [2024-04-26 13:15:49.311117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.311304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.311312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.311687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.312021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.312028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.312358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.312660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.312666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.312851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.313152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.313158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.313484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.313651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.313657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.313988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.314315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.314322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.314636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.314953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.314959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 
00:32:44.521 [2024-04-26 13:15:49.315346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.315604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.315610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.315952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.316252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.316258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.316586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.316923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.316930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.317253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.317592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.317598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.317877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.318076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.318082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.521 qpair failed and we were unable to recover it. 00:32:44.521 [2024-04-26 13:15:49.318397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.521 [2024-04-26 13:15:49.318654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.318661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.318999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.319345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.319351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 
00:32:44.522 [2024-04-26 13:15:49.319750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.320074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.320080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.320389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.320690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.320696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.320891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.321274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.321280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.321471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.321868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.321874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.322185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.322361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.322367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.322756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.323119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.323126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.323419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.323641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.323647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 
00:32:44.522 [2024-04-26 13:15:49.323973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.324323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.324329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.324636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.324956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.324964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.325274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.325581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.325588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.325927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.326237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.326244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.326475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.326668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.326676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.326848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.327169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.327177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.327355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.327562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.327568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 
00:32:44.522 [2024-04-26 13:15:49.327948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.328250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.328256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.328544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.328763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.328770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.329090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.329234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.329242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.329540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.329858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.329865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.330192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.330479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.330485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.330800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.331131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.331138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.331523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.331600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.331606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 
00:32:44.522 [2024-04-26 13:15:49.331909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.332245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.332253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.332680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.332991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.332998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.333163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.333384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.333391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.333651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.333959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.333965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.522 qpair failed and we were unable to recover it. 00:32:44.522 [2024-04-26 13:15:49.334163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.522 [2024-04-26 13:15:49.334380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.334387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.334689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.334841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.334848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.335210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.335542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.335548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 
00:32:44.523 [2024-04-26 13:15:49.335876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.336193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.336199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.336529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.336847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.336854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.337070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.337353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.337360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.337667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.338000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.338009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.338319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.338544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.338551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.338876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.339083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.339090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.339399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.339680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.339687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 
00:32:44.523 [2024-04-26 13:15:49.340000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.340321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.340328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.340535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.340870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.340876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.341066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.341386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.341392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.341598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.341948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.341955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.342361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.342631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.342637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.342952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.343254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.343260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.343571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.343892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.343901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 
00:32:44.523 [2024-04-26 13:15:49.344227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.344543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.344549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.344941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.345148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.345154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.523 qpair failed and we were unable to recover it. 00:32:44.523 [2024-04-26 13:15:49.345450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.345788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.523 [2024-04-26 13:15:49.345795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.346110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.346440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.346446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.346763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.347055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.347062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.347371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.347712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.347719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.348015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.348225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.348231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 
00:32:44.524 [2024-04-26 13:15:49.348417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.348760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.348766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.349073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.349397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.349404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.349779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.349922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.349929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.350278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.350589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.350596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.350899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.351233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.351240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.351407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.351599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.351606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.351914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.352256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.352263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 
00:32:44.524 [2024-04-26 13:15:49.352455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.352814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.352821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.352923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.353278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.353284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.353597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.353929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.353936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.354270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.354607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.354614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.354841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.355145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.355152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.355466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.355785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.355792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.356100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.356438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.356445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 
00:32:44.524 [2024-04-26 13:15:49.356499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.356780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.356786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.357100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.357400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.357407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.357729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.358025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.358031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.358226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.358612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.358619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.358986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.359261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.359268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.359560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.359856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.359863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.360028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.360333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.360340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 
00:32:44.524 [2024-04-26 13:15:49.360656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.360891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.360898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.361195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.361379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.361387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.524 qpair failed and we were unable to recover it. 00:32:44.524 [2024-04-26 13:15:49.361694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.362040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.524 [2024-04-26 13:15:49.362047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.362352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.362660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.362666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.362984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.363323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.363329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.363632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.363986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.363993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.364295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.364474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.364481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 
00:32:44.525 [2024-04-26 13:15:49.364748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.365140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.365146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.365424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.365755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.365762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.366063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.366369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.366377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.366709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.367012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.367019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.367349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.367642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.367648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.368015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.368342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.368350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.368521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.368804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.368810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 
00:32:44.525 [2024-04-26 13:15:49.369038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.369359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.369366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.369562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.369747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.369755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.369989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.370078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.370084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.370390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.370701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.370707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.371024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.371317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.371324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.371513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.371878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.371885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.372202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.372482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.372488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 
00:32:44.525 [2024-04-26 13:15:49.372786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.373124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.373131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.373422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.373571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.373578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.373739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.374119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.374126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.374442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.374620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.374627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.374913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.375242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.375249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.375539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.375854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.375862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.525 [2024-04-26 13:15:49.376146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.376482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.376488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 
00:32:44.525 [2024-04-26 13:15:49.376801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.377074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.525 [2024-04-26 13:15:49.377081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.525 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.377382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.377673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.377679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.378001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.378078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.378084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.378393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.378589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.378596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.378923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.379076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.379084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.379378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.379688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.379695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.379999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.380322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.380328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 
00:32:44.526 [2024-04-26 13:15:49.380624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.380915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.380922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.381235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.381550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.381557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.381854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.382038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.382044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.382331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.382515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.382522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.382824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.383139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.383146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.383436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.383645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.383651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.383962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.384290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.384296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 
00:32:44.526 [2024-04-26 13:15:49.384588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.384899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.384906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.385122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.385481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.385487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.385780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.386069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.386076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.386418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.386702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.386709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.387016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.387228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.387234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.387547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.387859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.387867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.388195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.388520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.388527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 
00:32:44.526 [2024-04-26 13:15:49.388833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.389166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.389173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.389477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.389684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.389691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.390025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.390345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.390352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.390526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.390861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.390869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.391159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.391319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.391326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.391561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.391861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.391868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 00:32:44.526 [2024-04-26 13:15:49.392187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.392386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.526 [2024-04-26 13:15:49.392392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.526 qpair failed and we were unable to recover it. 
00:32:44.526 [2024-04-26 13:15:49.392698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.393012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.393019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.393331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.393650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.393657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.393948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.394160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.394166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.394494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.394786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.394793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.395121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.395415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.395422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.395616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.395923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.395929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.396102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.396387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.396394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 
00:32:44.527 [2024-04-26 13:15:49.396673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.397008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.397015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.397330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.397661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.397667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.397960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.398280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.398286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.398505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.398828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.398834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.399132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.399448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.399454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.399639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.399995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.400002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.400297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.400608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.400615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 
00:32:44.527 [2024-04-26 13:15:49.400926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.401237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.401243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.401639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.401947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.401954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.401994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.402281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.402288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.402588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.402748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.402755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.403069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.403418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.403426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.403721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.403996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.404009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.404326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.404639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.404645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 
00:32:44.527 [2024-04-26 13:15:49.404952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.405272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.405278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.405476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.405669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.405682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.406027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.406212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.406219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.406505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.406807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.406813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.407154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.407478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.407484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.407791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.408159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.408166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 00:32:44.527 [2024-04-26 13:15:49.408451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.408643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.408650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.527 qpair failed and we were unable to recover it. 
00:32:44.527 [2024-04-26 13:15:49.408982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.527 [2024-04-26 13:15:49.409329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.409335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.409664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.409823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.409830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.410139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.410475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.410482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.410790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.411096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.411102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.411404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.411614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.411620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.411936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.412264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.412270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.412579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.412852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.412859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 
00:32:44.528 [2024-04-26 13:15:49.413157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.413453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.413459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.413717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.414016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.414023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.414313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.414616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.414622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.414935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.415231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.415237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.415543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.415872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.415879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.416186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.416516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.416523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.416814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.417109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.417115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 
00:32:44.528 [2024-04-26 13:15:49.417411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.417746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.417752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.418048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.418372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.418379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.418533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.418799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.418806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.419113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.419417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.419424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.419724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.420046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.420055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.420308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.420494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.528 [2024-04-26 13:15:49.420502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.528 qpair failed and we were unable to recover it. 00:32:44.528 [2024-04-26 13:15:49.420695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.421015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.421022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 
00:32:44.529 [2024-04-26 13:15:49.421344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.421711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.421717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.422022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.422303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.422309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.422616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.422933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.422939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.423272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.423584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.423591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.423887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.424220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.424228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.424526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.424822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.424829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.425062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.425243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.425249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 
00:32:44.529 [2024-04-26 13:15:49.425577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.425928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.425941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.426275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.426569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.426576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.426873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.427174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.427181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.427490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.427780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.427787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.428102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.428439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.428446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.428780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.429099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.429107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.429388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.429570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.429578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 
00:32:44.529 [2024-04-26 13:15:49.429895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.430216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.430223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.430531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.430845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.430852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.431170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.431344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.431350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.431521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.431705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.431713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.432025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.432348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.432356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.432671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.432987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.432994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.433330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.433512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.433518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 
00:32:44.529 [2024-04-26 13:15:49.433843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.434140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.434146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.434307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.434527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.434533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.434840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.435172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.435178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.435457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.435778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.529 [2024-04-26 13:15:49.435784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.529 qpair failed and we were unable to recover it. 00:32:44.529 [2024-04-26 13:15:49.436079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.436398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.436404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.436702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.437019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.437026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.437325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.437616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.437622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 
00:32:44.530 [2024-04-26 13:15:49.438001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.438335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.438342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.438635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.438939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.438946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.439263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.439561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.439567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.439891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.440195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.440202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.440501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.440807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.440813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.441109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.441300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.441306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.441479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.441850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.441856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 
00:32:44.530 [2024-04-26 13:15:49.442149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.442470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.442476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.442769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.443098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.443105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.443405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.443730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.443737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.444013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.444338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.444353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.444681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.444856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.444864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.445184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.445497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.445503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.445786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.446092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.446099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 
00:32:44.530 [2024-04-26 13:15:49.446430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.446743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.446750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.447053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.447291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.447298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.447591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.447910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.447917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.448208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.448522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.448528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.448732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.449050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.449057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.449348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.449663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.449669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 00:32:44.530 [2024-04-26 13:15:49.449858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.450195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.450201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.530 qpair failed and we were unable to recover it. 
00:32:44.530 [2024-04-26 13:15:49.450494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.530 [2024-04-26 13:15:49.450775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.450781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.451039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.451255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.451261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.451466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.451782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.451788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.452127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.452459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.452466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.452767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.453062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.453069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.453360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.453655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.453662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.453980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.454280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.454286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 
00:32:44.531 [2024-04-26 13:15:49.454597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.454911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.454918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.455211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.455482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.455488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.455779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.456068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.456075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.456374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.456687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.456694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.456991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.457306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.457313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.457646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.457884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.457890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.458211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.458371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.458378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 
00:32:44.531 [2024-04-26 13:15:49.458746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.459051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.459057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.459370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.459682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.459689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.459981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.460298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.460304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.460618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.460930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.460937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.461246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.461562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.461569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.461941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.462231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.462237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.462543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.462877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.462883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 
00:32:44.531 [2024-04-26 13:15:49.463235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.463549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.463556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.463744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.464056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.464064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.464275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.464608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.464615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.464914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.465128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.465135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.465438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.465779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.465786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.466100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.466271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.466278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 00:32:44.531 [2024-04-26 13:15:49.466626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.466920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.466927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.531 qpair failed and we were unable to recover it. 
00:32:44.531 [2024-04-26 13:15:49.467227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.531 [2024-04-26 13:15:49.467518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.467525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.467717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.468062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.468069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.468367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.468692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.468699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.469005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.469296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.469303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.469530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.469736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.469742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.470031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.470380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.470386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.470696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.470987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.470994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 
00:32:44.532 [2024-04-26 13:15:49.471179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.471545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.471552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.471912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.472240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.472246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.472435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.472770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.472776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.473086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.473409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.473415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.473715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.473923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.473929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.474264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.474598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.474605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.474781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.475075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.475082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 
00:32:44.532 [2024-04-26 13:15:49.475401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.475725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.475731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.476099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.476405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.476412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.476707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.477026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.477033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.477349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.477667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.477674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.477973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.478278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.478284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.478471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.478648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.478656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.478999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.479311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.479317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 
00:32:44.532 [2024-04-26 13:15:49.479632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.479934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.479941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.480241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.480536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.480542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.480725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.481062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.481068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.481369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.481678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.481684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.482001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.482339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.482345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.482655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.482849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.482857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.483164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.483554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.483560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 
00:32:44.532 [2024-04-26 13:15:49.483855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.484139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.532 [2024-04-26 13:15:49.484146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.532 qpair failed and we were unable to recover it. 00:32:44.532 [2024-04-26 13:15:49.484352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.484685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.484691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.484987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.485275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.485281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.485575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.485888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.485895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.486199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.486487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.486493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.486865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.487161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.487167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.487459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.487735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.487741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 
00:32:44.533 [2024-04-26 13:15:49.488017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.488341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.488347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.488632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.488870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.488876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.489176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.489492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.489498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.489842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.490137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.490144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.490447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.490788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.490795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.491017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.491207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.491213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.491515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.491847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.491855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 
00:32:44.533 [2024-04-26 13:15:49.492207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.492581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.492587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.492888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.493186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.493193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.493505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.493803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.493809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.494113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.494400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.494406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.494719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.495001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.495007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.495298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.495634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.495640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.495926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.496305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.496312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 
00:32:44.533 [2024-04-26 13:15:49.496613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.496777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.496784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.497068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.497390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.497396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.497688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.497994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.498001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.533 [2024-04-26 13:15:49.498290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.498624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.533 [2024-04-26 13:15:49.498630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.533 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.498930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.499257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.499263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.499603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.499829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.499838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.500140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.500457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.500463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 
00:32:44.534 [2024-04-26 13:15:49.500771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.501105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.501112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.501427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.501612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.501618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.501936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.502101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.502108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.502388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.502672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.502678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.502895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.503250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.503256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.503556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.503852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.503859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.504162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.504355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.504361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 
00:32:44.534 [2024-04-26 13:15:49.504673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.504993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.504999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.505296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.505494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.505501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.505673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.505898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.505905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.506126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.506440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.506446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.506762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.507043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.507049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.507345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.507689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.507696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.507905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.508195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.508201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 
00:32:44.534 [2024-04-26 13:15:49.508512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.508828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.508834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.509072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.509348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.509356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.509683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.509980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.509987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.510295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.510575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.510581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.510872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.511153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.511160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.511325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.511493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.511500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.511834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.512017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.512024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 
00:32:44.534 [2024-04-26 13:15:49.512376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.512709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.512716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.512915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.513188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.513194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.513408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.513737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.513744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.514024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.514227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.534 [2024-04-26 13:15:49.514234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.534 qpair failed and we were unable to recover it. 00:32:44.534 [2024-04-26 13:15:49.514558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.514944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.514952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.515150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.515424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.515430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.515722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.515917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.515924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 
00:32:44.535 [2024-04-26 13:15:49.516274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.516597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.516604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.516800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.516991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.516998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.517333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.517516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.517523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.517815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.518178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.518185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.518509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.518859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.518866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.519198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.519495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.519501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.519810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.520134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.520140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 
00:32:44.535 [2024-04-26 13:15:49.520433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.520727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.520735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.521022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.521333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.521339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.521634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.521949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.521955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.522108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.522354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.522361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.522582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.522879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.522886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.523189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.523514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.523520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.523830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.524162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.524169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 
00:32:44.535 [2024-04-26 13:15:49.524469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.524669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.524676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.524864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.525176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.525182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.525377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.525713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.525721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.526018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.526340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.526348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.526660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.526974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.526981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.527276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.527595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.527601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.527773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.528058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.528064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 
00:32:44.535 [2024-04-26 13:15:49.528395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.528713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.528719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.529026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.529103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.529110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.529425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.529741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.529748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.530070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.530384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.535 [2024-04-26 13:15:49.530390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.535 qpair failed and we were unable to recover it. 00:32:44.535 [2024-04-26 13:15:49.530554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.530775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.530782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.531093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.531436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.531442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.531738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.532082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.532089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 
00:32:44.536 [2024-04-26 13:15:49.532389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.532673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.532679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.533017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.533351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.533357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.533656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.533958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.533964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.534259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.534576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.534582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.534885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.535106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.535112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.535274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.535517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.535523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.535721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.536020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.536027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 
00:32:44.536 [2024-04-26 13:15:49.536369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.536702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.536708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.537007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.537197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.537204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.537572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.537886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.537893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.538200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.538482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.538489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.538780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.539077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.539083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.539384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.539678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.539684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.539977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.540275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.540281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 
00:32:44.536 [2024-04-26 13:15:49.540468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.540829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.540835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.541137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.541459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.541465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.541756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.542073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.542080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.542389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.542735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.542741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.542978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.543299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.543305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.543593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.543917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.543924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.544236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.544484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.544490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 
00:32:44.536 [2024-04-26 13:15:49.544818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.545144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.545151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.545467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.545811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.545819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.546169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.546488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.546495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.546889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.547167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.547175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.536 qpair failed and we were unable to recover it. 00:32:44.536 [2024-04-26 13:15:49.547486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.536 [2024-04-26 13:15:49.547687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.547695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.547889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.548167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.548174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.548361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.548697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.548704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 
00:32:44.537 [2024-04-26 13:15:49.548997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.549348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.549356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.549645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.549896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.549903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.550233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.550558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.550566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.550903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.551243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.551251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.551545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.551866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.551874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.552206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.552394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.552401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.552722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.553016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.553024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 
00:32:44.537 [2024-04-26 13:15:49.553336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.553677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.553684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.553970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.554203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.554210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.554457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.554547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.554554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.554897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.555176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.555182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.555511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.555812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.555819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.556101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.556427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.556434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.556630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.556844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.556851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 
00:32:44.537 [2024-04-26 13:15:49.557169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.557491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.557498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.557791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.558088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.558095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.558407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.558605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.558612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.558989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.559326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.559333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.559649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.559810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.559817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.560079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.560269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.560275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.560507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.560781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.560787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 
00:32:44.537 [2024-04-26 13:15:49.561041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.561376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.561382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.561700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.561891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.561898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.562218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.562537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.562543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.562854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.563196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.563202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.563512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.563879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.537 [2024-04-26 13:15:49.563886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.537 qpair failed and we were unable to recover it. 00:32:44.537 [2024-04-26 13:15:49.564090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.564402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.564409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 00:32:44.538 [2024-04-26 13:15:49.564703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.564995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.565002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 
00:32:44.538 [2024-04-26 13:15:49.565355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.565539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.565545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 00:32:44.538 [2024-04-26 13:15:49.565917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.566217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.566223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 00:32:44.538 [2024-04-26 13:15:49.566550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.566848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.566856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 00:32:44.538 [2024-04-26 13:15:49.567177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.567351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.567357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 00:32:44.538 [2024-04-26 13:15:49.567572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.567884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.567891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 00:32:44.538 [2024-04-26 13:15:49.568122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.568408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.538 [2024-04-26 13:15:49.568415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.538 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.568717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.568907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.568914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 
00:32:44.809 [2024-04-26 13:15:49.569262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.569560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.569566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.569753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.570085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.570091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.570384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.570673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.570679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.570847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.571133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.571140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.571451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.571624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.571630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.571929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.572094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.572100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.572369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.572711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.572717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 
00:32:44.809 [2024-04-26 13:15:49.573041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.573353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.573361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.573664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.573849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.573856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.574187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.574494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.574502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.574810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.575132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.575138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.575529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.575777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.575784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.576096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.576261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.576268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.576582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.576919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.576925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 
00:32:44.809 [2024-04-26 13:15:49.577263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.577604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.577610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.577913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.578220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.578226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.578271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.578616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.578623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.578935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.579259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.579265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.579593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.579788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.579795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.580101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.580270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.580276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.809 [2024-04-26 13:15:49.580589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.580927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.580934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 
00:32:44.809 [2024-04-26 13:15:49.581171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.581482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.809 [2024-04-26 13:15:49.581489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.809 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.581795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.581984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.581991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.582318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.582668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.582675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.583024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.583334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.583341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.583652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.583983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.583990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.584287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.584609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.584616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.584906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.585194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.585201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 
00:32:44.810 [2024-04-26 13:15:49.585542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.585747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.585753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.586056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.586328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.586335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.586559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.586846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.586853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.587166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.587531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.587538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.587854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.588123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.588129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.588448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.588747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.588753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.589090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.589425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.589431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 
00:32:44.810 [2024-04-26 13:15:49.589736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.590066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.590072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.590294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.590608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.590614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.590914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.591263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.591269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.591573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.591876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.591883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.592205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.592528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.592534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.592696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.592999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.593006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.593284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.593615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.593623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 
00:32:44.810 [2024-04-26 13:15:49.593960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.594129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.594136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.594414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.594723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.594729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.595061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.595379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.595386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.595600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.595943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.595950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.596176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.596547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.596554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.596833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.597016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.597024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 00:32:44.810 [2024-04-26 13:15:49.597310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.597638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.597645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.810 qpair failed and we were unable to recover it. 
00:32:44.810 [2024-04-26 13:15:49.597843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.598124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.810 [2024-04-26 13:15:49.598131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.598438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.598743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.598751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.598960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.599333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.599340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.599659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.599939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.599946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.600276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.600592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.600598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.600905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.601230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.601236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.601536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.601817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.601823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 
00:32:44.811 [2024-04-26 13:15:49.602122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.602443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.602449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.602739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.603062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.603071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.603407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.603707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.603714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.604018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.604191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.604198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.604512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.604593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.604599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.604900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.605202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.605208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.605511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.605823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.605829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 
00:32:44.811 [2024-04-26 13:15:49.606036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.606372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.606378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.606689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.606973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.606980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.607277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.607489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.607495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.607793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.608121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.608127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.608332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.608665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.608674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.608994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.609315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.609322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.609607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.609903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.609909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 
00:32:44.811 [2024-04-26 13:15:49.610205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.610546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.610553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.610849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.611165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.611171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.611466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.611728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.611735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.611960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.611998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.612005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.612304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.612647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.612654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.612976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.613293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.613299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 00:32:44.811 [2024-04-26 13:15:49.613496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.613792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.613798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.811 qpair failed and we were unable to recover it. 
00:32:44.811 [2024-04-26 13:15:49.614129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.811 [2024-04-26 13:15:49.614442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.614450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.614640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.615001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.615008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.615316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.615604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.615611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.615811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.616174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.616181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.616500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.616835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.616844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.617151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.617352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.617358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.617664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.618009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.618015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 
00:32:44.812 [2024-04-26 13:15:49.618337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.618667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.618673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.618852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.618980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.618986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.619254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.619571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.619578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.619963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.620252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.620260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.620449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.620771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.620778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.620971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.621244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.621251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.621563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.621867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.621875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 
00:32:44.812 [2024-04-26 13:15:49.622173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.622510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.622516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.622847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.623159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.623166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.623329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.623706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.623712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.623908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.624274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.624280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.624593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.624899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.624906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.625195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.625487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.625493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.625786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.626114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.626120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 
00:32:44.812 [2024-04-26 13:15:49.626498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.626783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.626790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.627099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.627293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.627299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.627605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.627889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.627895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.628110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.628271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.628278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.628561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.628866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.628873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.629182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.629322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.629328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.812 qpair failed and we were unable to recover it. 00:32:44.812 [2024-04-26 13:15:49.629699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.630022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.812 [2024-04-26 13:15:49.630028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 
00:32:44.813 [2024-04-26 13:15:49.630322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.630606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.630612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.630920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.631244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.631250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.631447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.631788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.631794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.632094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.632287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.632293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.632609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.632930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.632938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.633259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.633564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.633571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.633756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.634082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.634088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 
00:32:44.813 [2024-04-26 13:15:49.634427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.634743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.634750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.635059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.635351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.635358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.635676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.636063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.636070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.636393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.636722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.636729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.637013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.637185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.637192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.637509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.637852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.637859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.638147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.638479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.638485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 
00:32:44.813 [2024-04-26 13:15:49.638800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.639142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.639148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.639448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.639733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.639739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.640060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.640368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.640374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.640676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.640976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.640982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.641295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.641611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.641618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.641897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.642080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.642086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.642412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.642617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.642623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 
00:32:44.813 [2024-04-26 13:15:49.643003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.643294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.813 [2024-04-26 13:15:49.643301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.813 qpair failed and we were unable to recover it. 00:32:44.813 [2024-04-26 13:15:49.643626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.643950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.643958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.644352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.644665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.644671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.644970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.645280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.645286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.645472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.645850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.645857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.646178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.646473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.646479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.646795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.647152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.647159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 
00:32:44.814 [2024-04-26 13:15:49.647460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.647764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.647771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.648086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.648329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.648336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.648648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.648834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.648844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.649143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.649462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.649469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.649771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.650030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.650037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.650217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.650548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.650555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.650835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.651148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.651154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 
00:32:44.814 [2024-04-26 13:15:49.651433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.651754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.651761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.652061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.652395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.652401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.652708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.653050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.653056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.653424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.653724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.653730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.654121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.654422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.654428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.654695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.655023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.655029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.655319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.655654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.655660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 
00:32:44.814 [2024-04-26 13:15:49.655972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.656278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.656284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.656599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.656911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.656918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.657218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.657384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.657390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.657703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.658003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.658010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.658287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.658621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.658628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.658941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.659253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.659259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.659559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.659859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.659866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 
00:32:44.814 [2024-04-26 13:15:49.660184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.660489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.660496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.814 qpair failed and we were unable to recover it. 00:32:44.814 [2024-04-26 13:15:49.660812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.814 [2024-04-26 13:15:49.661147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.661153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.661467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.661752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.661758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.662061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.662206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.662212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.662388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.662735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.662741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.663047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.663370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.663377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.663531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.663810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.663816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 
00:32:44.815 [2024-04-26 13:15:49.664113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.664431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.664437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.664738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.664894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.664901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.665195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.665499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.665506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.665842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.666145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.666151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.666451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.666770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.666777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.667082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.667303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.667309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.667626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.667872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.667880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 
00:32:44.815 [2024-04-26 13:15:49.668215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.668527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.668533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.668845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.669145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.669151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.669452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.669601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.669607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.669940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.670269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.670275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.670577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.670915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.670929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.671220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.671501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.671507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.671809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.672030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.672037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 
00:32:44.815 [2024-04-26 13:15:49.672219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.672489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.672495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.672814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.673109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.673116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.673303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.673606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.673612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.673901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.674214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.674220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.674524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.674813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.674819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.675120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.675377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.815 [2024-04-26 13:15:49.675383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.815 qpair failed and we were unable to recover it. 00:32:44.815 [2024-04-26 13:15:49.675688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.675996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.676002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 
00:32:44.816 [2024-04-26 13:15:49.676302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.676596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.676602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.676921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.677212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.677218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.677517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.677806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.677812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.678117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.678310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.678316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.678636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.678954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.678960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.679252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.679577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.679583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.679895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.680063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.680070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 
00:32:44.816 [2024-04-26 13:15:49.680368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.680693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.680704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.680996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.681367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.681373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.681679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.681827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.681834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.682146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.682444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.682450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.682744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.683066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.683073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.683360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.683667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.683673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.683988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.684200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.684206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 
00:32:44.816 [2024-04-26 13:15:49.684488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.684796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.684803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.685070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.685397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.685404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.685706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.686020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.686026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.686403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.686740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.686746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.687046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.687335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.687342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.687639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.687816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.687822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.688010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.688383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.688389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 
00:32:44.816 [2024-04-26 13:15:49.688703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.689020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.689026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.689222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.689564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.689571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.689872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.690177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.690183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.690489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.690562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.690569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.690878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.691203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.691209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.691519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.691815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.691821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.816 qpair failed and we were unable to recover it. 00:32:44.816 [2024-04-26 13:15:49.692105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.816 [2024-04-26 13:15:49.692266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.692273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 
00:32:44.817 [2024-04-26 13:15:49.692497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.692829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.692835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.693129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.693407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.693413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.693712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.693908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.693915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.694100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.694386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.694392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.694703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.695023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.695030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.695346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.695667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.695673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.695978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.696269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.696275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 
00:32:44.817 [2024-04-26 13:15:49.696655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.696964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.696971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.697294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.697611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.697619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.697914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.698240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.698247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.698420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.698720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.698727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.699014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.699336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.699342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.699642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.699947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.699955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.700287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.700593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.700599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 
00:32:44.817 [2024-04-26 13:15:49.700883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.701201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.701207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.701501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.701823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.701829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.702123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.702422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.702429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.702617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.702959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.702966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.703282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.703570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.703577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.703878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.704190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.704197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.704582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.704893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.704900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 
00:32:44.817 [2024-04-26 13:15:49.705199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.705399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.705405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.705749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.706071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.706078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.706377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.706540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.706546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.706890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.707235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.707241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.707555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.707728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.817 [2024-04-26 13:15:49.707735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.817 qpair failed and we were unable to recover it. 00:32:44.817 [2024-04-26 13:15:49.707905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.708218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.708224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.708515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.708841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.708848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 
00:32:44.818 [2024-04-26 13:15:49.709187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.709531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.709540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.709899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.710221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.710227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.710526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.710726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.710732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.711085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.711403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.711409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.711707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.711907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.711914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.712224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.712564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.712570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.712862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.713148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.713154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 
00:32:44.818 [2024-04-26 13:15:49.713470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.713661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.713667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.713862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.714206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.714213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.714519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.714832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.714841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.715116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.715272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.715279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.715586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.715890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.715898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.716213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.716530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.716536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.716826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.717199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.717206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 
00:32:44.818 [2024-04-26 13:15:49.717496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.717790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.717796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.718106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.718422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.718428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.718728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.719072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.719078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.719455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.719746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.719752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.720038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.720365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.720372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.720563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.720892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.720898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 00:32:44.818 [2024-04-26 13:15:49.721192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.721491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.721497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.818 qpair failed and we were unable to recover it. 
00:32:44.818 [2024-04-26 13:15:49.721796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.818 [2024-04-26 13:15:49.722117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.722123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.722433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.722754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.722760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.723051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.723386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.723393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.723685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.723990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.723997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.724373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.724679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.724685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.725011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.725346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.725352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.725541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.725866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.725873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 
00:32:44.819 [2024-04-26 13:15:49.726102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.726439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.726446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.726683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.727009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.727016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.727348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.727625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.727632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.727943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.728246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.728253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.728469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.728782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.728790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.729020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.729348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.729356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.729716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.730025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.730033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 
00:32:44.819 [2024-04-26 13:15:49.730346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.730539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.730546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.730868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.731073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.731080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.731405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.731591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.731598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.731785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.732075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.732082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.732391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.732697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.732703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.733044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.733367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.733374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.733687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.734020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.734026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 
00:32:44.819 [2024-04-26 13:15:49.734347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.734534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.734541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.734888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.735184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.735191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.735516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.735810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.735817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.736032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.736310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.736316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.736578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.736878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.736885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.737049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.737372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.737379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 00:32:44.819 [2024-04-26 13:15:49.737708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.738016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.738024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.819 qpair failed and we were unable to recover it. 
00:32:44.819 [2024-04-26 13:15:49.738342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.819 [2024-04-26 13:15:49.738638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.738644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.739061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.739391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.739398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.739743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.740062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.740069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.740364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.740677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.740683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.741001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.741321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.741327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.741635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.741953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.741960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.742170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.742449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.742455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 
00:32:44.820 [2024-04-26 13:15:49.742767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.743058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.743065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.743367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.743687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.743693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.743975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.744308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.744315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.744615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.744916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.744923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.745236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.745549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.745555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.745849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.746150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.746156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.746466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.746749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.746755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 
00:32:44.820 [2024-04-26 13:15:49.746926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.747303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.747309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.747525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.747850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.747857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.748159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.748481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.748488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.748812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.749131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.749138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.749475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.749789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.749795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.750085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.750425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.750432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.750726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.751020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.751027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 
00:32:44.820 [2024-04-26 13:15:49.751344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.751665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.751671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.751986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.752332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.752339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.752677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.752983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.752990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.753315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.753612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.753618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.753927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.754115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.754121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.754417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.754696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.754702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.820 qpair failed and we were unable to recover it. 00:32:44.820 [2024-04-26 13:15:49.755072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.820 [2024-04-26 13:15:49.755403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.755410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 
00:32:44.821 [2024-04-26 13:15:49.755725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.756100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.756106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.756405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.756738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.756745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.757063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.757401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.757408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.757611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.757920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.757926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.758245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.758558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.758565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.758877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.759195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.759202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.759508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.759792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.759799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 
00:32:44.821 [2024-04-26 13:15:49.760125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.760394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.760401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.760777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.761093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.761101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.761397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.761630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.761637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.761912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.762239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.762247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.762558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.762878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.762885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.763186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.763522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.763529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.763828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.764148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.764154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 
00:32:44.821 [2024-04-26 13:15:49.764460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.764786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.764793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.765114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.765463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.765469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.765768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.766090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.766096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.766327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.766637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.766644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.766888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.767207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.767213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.767505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.767815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.767821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.767986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.768358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.768364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 
00:32:44.821 [2024-04-26 13:15:49.768674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.768991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.768998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.769316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.769598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.769605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.769969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.770287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.770294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.770579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.770889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.770896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.771231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.771538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.771545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.771902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.772202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.772209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 00:32:44.821 [2024-04-26 13:15:49.772510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.772804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.772810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.821 qpair failed and we were unable to recover it. 
00:32:44.821 [2024-04-26 13:15:49.773084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.821 [2024-04-26 13:15:49.773451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.773458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.773754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.774065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.774072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.774369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.774657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.774664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.775011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.775347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.775354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.775669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.775954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.775962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.776287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.776574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.776581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.776875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.777173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.777180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 
00:32:44.822 [2024-04-26 13:15:49.777482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.777797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.777803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.778129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.778462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.778469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.778777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.778969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.778976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.779305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.779616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.779622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.779811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.780163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.780170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.780484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.780804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.780811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.781006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.781362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.781369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 
00:32:44.822 [2024-04-26 13:15:49.781663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.781966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.781973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.782352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.782658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.782666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.782962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.783263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.783271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.783583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.783889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.783896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.784156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.784453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.784461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.784827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.785126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.785133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.785429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.785775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.785782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 
00:32:44.822 [2024-04-26 13:15:49.786095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.786399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.786407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.786714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.787071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.787079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.787402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.787701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.787708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.788012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.788271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.788278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.788481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.788752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.788760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.789073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.789270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.822 [2024-04-26 13:15:49.789279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.822 qpair failed and we were unable to recover it. 00:32:44.822 [2024-04-26 13:15:49.789572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.789888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.789896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 
00:32:44.823 [2024-04-26 13:15:49.790224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.790567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.790574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.790908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.791218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.791225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.791532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.791873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.791881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.792162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.792475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.792482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.792777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.792999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.793006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.793327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.793616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.793623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.793920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.794238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.794244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 
00:32:44.823 [2024-04-26 13:15:49.794538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.794750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.794757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.795056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.795380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.795389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.795693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.795927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.795934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.796243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.796520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.796527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.796843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 19011 Killed "${NVMF_APP[@]}" "$@" 00:32:44.823 [2024-04-26 13:15:49.797171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.797179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.797394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.797610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.797617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 13:15:49 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:32:44.823 [2024-04-26 13:15:49.797826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 13:15:49 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:44.823 [2024-04-26 13:15:49.798158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.798165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 
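Context for the repeated failures above: errno 111 on Linux is ECONNREFUSED, and the shell trace shows that line 44 of target_disconnect.sh has just killed the running nvmf target process, so every subsequent TCP connect to 10.0.0.2:4420 is refused until the target is restarted. A minimal, hypothetical C reproduction of that failure mode (not part of the test suite; against an unreachable host the same call may time out instead of being refused):

    /* Hypothetical sketch: with no NVMe/TCP target listening on 10.0.0.2:4420,
     * connect() fails with ECONNREFUSED, which is errno 111 on Linux. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }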
00:32:44.823 13:15:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:44.823 [2024-04-26 13:15:49.798475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 13:15:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:44.823 [2024-04-26 13:15:49.798757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.798764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:32:44.823 [2024-04-26 13:15:49.799046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.799265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.799272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.799584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.799789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.799796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.799980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.800173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.800180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.800517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.800746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.800753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.801067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.801383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.801390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 00:32:44.823 [2024-04-26 13:15:49.801703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.802028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.802035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.823 qpair failed and we were unable to recover it. 
00:32:44.823 [2024-04-26 13:15:49.802210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.823 [2024-04-26 13:15:49.802547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.802553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.802864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.803094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.803101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.803405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.803718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.803725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.803947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.804290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.804298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.804468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.804754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.804761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.805071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.805443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.805451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 13:15:49 -- nvmf/common.sh@470 -- # nvmfpid=19886 00:32:44.824 [2024-04-26 13:15:49.805794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 13:15:49 -- nvmf/common.sh@471 -- # waitforlisten 19886 00:32:44.824 [2024-04-26 13:15:49.805871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.805878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 
00:32:44.824 [2024-04-26 13:15:49.806094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 13:15:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:44.824 13:15:49 -- common/autotest_common.sh@817 -- # '[' -z 19886 ']' 00:32:44.824 [2024-04-26 13:15:49.806285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.806293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 13:15:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.824 [2024-04-26 13:15:49.806607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 13:15:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:44.824 [2024-04-26 13:15:49.806849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.806858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 13:15:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.824 [2024-04-26 13:15:49.807069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 13:15:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:44.824 [2024-04-26 13:15:49.807399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.807408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:32:44.824 [2024-04-26 13:15:49.807748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.807958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.807967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.808329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.808640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.808647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 
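The trace above restarts the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk namespace) and then waits for it to start listening on /var/tmp/spdk.sock before the test continues. A rough, hypothetical sketch of that wait-for-listener step, assuming a plain connect-and-retry loop rather than the actual SPDK shell helper:

    /* Hypothetical sketch (not the SPDK waitforlisten helper): poll a
     * UNIX-domain socket until something accepts connections on it. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_listen(const char *path, int timeout_s)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < timeout_s * 10; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;            /* listener is up */
            }
            close(fd);
            usleep(100 * 1000);      /* retry every 100 ms */
        }
        return -1;                   /* timed out */
    }

    int main(void)
    {
        return wait_for_listen("/var/tmp/spdk.sock", 30) == 0 ? 0 : 1;
    }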
00:32:44.824 [2024-04-26 13:15:49.808969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.809180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.809187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.809342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.809526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.809533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.809822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.810159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.810169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.810482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.810846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.810854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.811067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.811333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.811341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.811556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.811845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.811853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.812154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.812361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.812368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 
00:32:44.824 [2024-04-26 13:15:49.812558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.812886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.812893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.813177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.813486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.813493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.813705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.814010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.814018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.814392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.814692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.814700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.814927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.815311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.815318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.815661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.815906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.815918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.816252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.816586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.816593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 
00:32:44.824 [2024-04-26 13:15:49.816905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.817266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.817274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.817461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.817788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.824 [2024-04-26 13:15:49.817797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.824 qpair failed and we were unable to recover it. 00:32:44.824 [2024-04-26 13:15:49.818141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.818436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.818443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.818768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.819104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.819113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.819520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.819776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.819783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.820123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.820467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.820474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.820571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.820892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.820900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 
00:32:44.825 [2024-04-26 13:15:49.821249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.821557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.821565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.821888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.822112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.822121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.822472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.822800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.822807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.823046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.823238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.823246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.823502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.823691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.823699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.824083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.824401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.824407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.824634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.824942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.824950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 
00:32:44.825 [2024-04-26 13:15:49.825351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.825722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.825729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.825956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.826148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.826155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.826478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.826813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.826819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.827152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.827493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.827500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.827713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.828060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.828068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.828432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.828659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.828665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.828980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.829298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.829306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 
00:32:44.825 [2024-04-26 13:15:49.829521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.829970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.829977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.830282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.830500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.830506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.830793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.831209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.831217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.831536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.831750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.831756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.831934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.832247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.832255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.832602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.832819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.832826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.833196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.833515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.833521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 
00:32:44.825 [2024-04-26 13:15:49.833710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.834030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.834037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.834250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.834625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.834632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.834947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.835045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.835051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.835463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.835786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.835792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.835986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.836340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.836346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.825 qpair failed and we were unable to recover it. 00:32:44.825 [2024-04-26 13:15:49.836570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.825 [2024-04-26 13:15:49.836912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.836918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.837287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.837398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.837404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 
00:32:44.826 [2024-04-26 13:15:49.837707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.837939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.837946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.838275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.838471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.838477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.838712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.838882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.838889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.839056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.839380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.839387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.839710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.839926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.839933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.840290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.840619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.840626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.840957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.841340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.841352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 
00:32:44.826 [2024-04-26 13:15:49.841680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.841866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.841874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.841981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.842305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.842312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.842712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.843096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.843107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.843488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.843728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.843735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.843953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.844299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.844307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.844504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.844806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.844813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.845275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.845576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.845584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 
00:32:44.826 [2024-04-26 13:15:49.845914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.846227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.846234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.846419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.846746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.846753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.846958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.847262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.847269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.847489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.847849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.847857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.848047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.848497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.848507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.848845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.849147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.849154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.849402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.849576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.849583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 
00:32:44.826 [2024-04-26 13:15:49.849743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.850040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.850048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.850385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.850578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.850586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.850777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.851167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.851177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.851573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.851776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.851783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.852027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.852339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.852346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.852652] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:32:44.826 [2024-04-26 13:15:49.852685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.852702] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.826 [2024-04-26 13:15:49.852852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.852861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 
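The DPDK EAL parameters above carry the core mask handed down from nvmfappstart -m 0xF0: bits 4 through 7 are set, so the restarted target is pinned to CPU cores 4-7. An illustrative decode of that mask (assumed example, not taken from the log):

    /* Illustrative only: decode the 0xF0 core mask passed via "-m 0xF0" /
     * "-c 0xF0" into the CPU indices the target will run on. */
    #include <stdio.h>

    int main(void)
    {
        unsigned mask = 0xF0;
        for (unsigned cpu = 0; mask; cpu++, mask >>= 1)
            if (mask & 1)
                printf("core %u\n", cpu);   /* prints cores 4..7 */
        return 0;
    }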
00:32:44.826 [2024-04-26 13:15:49.853155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.853577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.853587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.826 [2024-04-26 13:15:49.853765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.854011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.826 [2024-04-26 13:15:49.854019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.826 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.854327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.854658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.854665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.854894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.855083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.855090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.855300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.855485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.855493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.855543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.855942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.855950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.856174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.856485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.856492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 
00:32:44.827 [2024-04-26 13:15:49.856806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.857117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.857124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.857444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.857778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.857786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.857984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.858417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.858424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.858478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.858646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.827 [2024-04-26 13:15:49.858654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:44.827 qpair failed and we were unable to recover it. 00:32:44.827 [2024-04-26 13:15:49.858990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.859196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.859204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.859497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.859705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.859713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.860017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.860372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.860379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 
00:32:45.098 [2024-04-26 13:15:49.860565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.860914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.860922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.861138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.861462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.861470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.861818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.862018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.862026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.862380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.862612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.862619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.862964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.863325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.863333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.863694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.864021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.864029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.864396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.864605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.864613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 
00:32:45.098 [2024-04-26 13:15:49.864803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.864972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.864981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.865321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.865668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.865676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.865863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.866105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.866113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.866424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.866781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.866789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.867102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.867318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.867326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.867557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.867917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.867925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.868277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.868624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.868631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 
00:32:45.098 [2024-04-26 13:15:49.868969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.869274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.869282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.869608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.869795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.869802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.870131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.870455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.870462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.870768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.871080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.871088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.871290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.871587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.871594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.871916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.872236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.872243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.872465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.872613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.872620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 
00:32:45.098 [2024-04-26 13:15:49.872809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.873013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.873021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.098 qpair failed and we were unable to recover it. 00:32:45.098 [2024-04-26 13:15:49.873346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.098 [2024-04-26 13:15:49.873707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.873715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.874051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.874405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.874413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.874598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.874922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.874930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.875286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.875636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.875644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.875847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.876152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.876159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.876481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.876755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.876762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 
00:32:45.099 [2024-04-26 13:15:49.877071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.877398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.877405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.877706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.878033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.878041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.878361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.878680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.878687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.879016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.879359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.879365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.879528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.879700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.879707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.880048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.880379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.880386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.880738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.881082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.881089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 
00:32:45.099 [2024-04-26 13:15:49.881419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.881667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.881674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.882002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.882294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.882300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.882603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.882948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.882956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.883174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.883450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.883457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.883814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.884155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.884162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.884499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.884801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.884808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.885136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.099 [2024-04-26 13:15:49.885480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.885489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 
00:32:45.099 [2024-04-26 13:15:49.885833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.886032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.886039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.886349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.886524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.886530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.886866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.887076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.887083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.887417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.887734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.887741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.887956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.888152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.888159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.888470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.888829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.888835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.889177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.889377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.889384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 
00:32:45.099 [2024-04-26 13:15:49.889715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.890010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.890017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.099 [2024-04-26 13:15:49.890331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.890681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.099 [2024-04-26 13:15:49.890688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.099 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.891012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.891326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.891332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.891688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.891978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.891985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.892322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.892522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.892530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.892836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.893246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.893253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.893555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.893878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.893885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 
00:32:45.100 [2024-04-26 13:15:49.894073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.894424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.894430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.894732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.895037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.895045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.895366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.895684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.895690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.895968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.896328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.896335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.896602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.896919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.896926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.897098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.897390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.897396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.897738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.898071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.898080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 
00:32:45.100 [2024-04-26 13:15:49.898445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.898753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.898760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.899072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.899247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.899254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.899628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.899918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.899925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.900298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.900604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.900610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.900964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.901184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.901190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.901387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.901561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.901568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.901880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.902202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.902208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 
00:32:45.100 [2024-04-26 13:15:49.902397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.902794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.902801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.903133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.903303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.903310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.903619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.903804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.903814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.904137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.904451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.904458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.904655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.904909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.904916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.905104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.905300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.905307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.905501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.905740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.905746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 
00:32:45.100 [2024-04-26 13:15:49.906064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.906345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.906351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.906548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.906918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.906925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.100 qpair failed and we were unable to recover it. 00:32:45.100 [2024-04-26 13:15:49.907253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.907455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.100 [2024-04-26 13:15:49.907462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.907778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.908085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.908091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.908397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.908747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.908754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.908911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.909081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.909088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.909386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.909690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.909698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 
00:32:45.101 [2024-04-26 13:15:49.909907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.910193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.910200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.910503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.910816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.910822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.911127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.911368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.911375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.911561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.911901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.911907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.912218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.912420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.912427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.912784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.913105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.913111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.913470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.913658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.913664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 
00:32:45.101 [2024-04-26 13:15:49.913983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.914297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.914303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.914622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.914909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.914916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.915222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.915532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.915538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.915848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.916136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.916143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.916452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.916784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.916791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.916881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.917180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.917186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.917417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.917593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.917599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 
00:32:45.101 [2024-04-26 13:15:49.917951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.918278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.918285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.918605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.918926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.918933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.919316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.919625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.919631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.919985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.920279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.920286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.920595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.920945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.920951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.921130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.921470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.921476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.921810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.922144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.922151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 
00:32:45.101 [2024-04-26 13:15:49.922500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.922826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.922832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.923201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.923526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.923533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.923833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.923994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.924001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.101 qpair failed and we were unable to recover it. 00:32:45.101 [2024-04-26 13:15:49.924231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.101 [2024-04-26 13:15:49.924615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.924622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.924826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.925176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.925183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.925495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.925842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.925850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.926136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.926461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.926467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 
00:32:45.102 [2024-04-26 13:15:49.926813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.927142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.927148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.927464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.927645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.927652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.927859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.928217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.928224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.928548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.928848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.928855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.929165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.929514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.929520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.929841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.930022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.930029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.930394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.930717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.930724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 
00:32:45.102 [2024-04-26 13:15:49.930911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.931165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.931172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.931407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.931631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.931637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.931903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.932256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.932262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.932578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.932870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.932876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.933199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.933408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.933415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.933724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.934067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.934074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.934386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.934727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.934734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 
00:32:45.102 [2024-04-26 13:15:49.934997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.935177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.935184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.935489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.935807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.935814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.936137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.936343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.936350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.936654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.936845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.936853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.937042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.937328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.937334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.937675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.937869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.937876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.102 qpair failed and we were unable to recover it. 00:32:45.102 [2024-04-26 13:15:49.938177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.938483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.102 [2024-04-26 13:15:49.938490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 
00:32:45.103 [2024-04-26 13:15:49.938836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.939032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.939038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:45.103 qpair failed and we were unable to recover it.
00:32:45.103 [2024-04-26 13:15:49.939370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.939701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.939707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:45.103 qpair failed and we were unable to recover it.
00:32:45.103 [2024-04-26 13:15:49.940004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.940319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.940325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:45.103 qpair failed and we were unable to recover it.
00:32:45.103 [2024-04-26 13:15:49.940483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.940518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:45.103 [2024-04-26 13:15:49.940669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.940676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:45.103 qpair failed and we were unable to recover it.
00:32:45.103 [2024-04-26 13:15:49.940848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.941173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.941179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:45.103 qpair failed and we were unable to recover it.
00:32:45.103 [2024-04-26 13:15:49.941483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.941790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.941796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:45.103 qpair failed and we were unable to recover it.
00:32:45.103 [2024-04-26 13:15:49.942122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.942326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.103 [2024-04-26 13:15:49.942332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420
00:32:45.103 qpair failed and we were unable to recover it.
00:32:45.103 [2024-04-26 13:15:49.942646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.942813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.942820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.942987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.943196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.943203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.943395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.943551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.943558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.943788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.944089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.944097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.944456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.944645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.944652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.944947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.945143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.945150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.945466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.945782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.945790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 
00:32:45.103 [2024-04-26 13:15:49.946112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.946279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.946286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.946581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.946944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.946952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.947289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.947612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.947619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.947984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.948314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.948320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.948644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.948697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.948704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.948990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.949191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.949198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.949535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.949732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.949739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 
00:32:45.103 [2024-04-26 13:15:49.950052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.950406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.950412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.950740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.951093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.951100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.951141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.951552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.951558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.951883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.952224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.952231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.952541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.952750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.952757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.953082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.953438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.953444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 00:32:45.103 [2024-04-26 13:15:49.953755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.954065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.103 [2024-04-26 13:15:49.954072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.103 qpair failed and we were unable to recover it. 
00:32:45.104 [2024-04-26 13:15:49.954360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.954669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.954676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.954957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.955275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.955281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.955608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.955920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.955927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.956257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.956406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.956413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.956718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.957026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.957033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.957349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.957498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.957505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.957813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.958138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.958145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 
00:32:45.104 [2024-04-26 13:15:49.958473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.958639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.958646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.959063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.959344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.959351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.959553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.959841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.959848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.960215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.960540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.960547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.960874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.961080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.961087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.961400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.961702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.961709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.961904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.962131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.962137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 
00:32:45.104 [2024-04-26 13:15:49.962319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.962492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.962500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.962795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.963114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.963120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.963414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.963739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.963746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.964059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.964380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.964387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.964712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.965026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.965033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.965210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.965368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.965375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.965689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.966015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.966022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 
00:32:45.104 [2024-04-26 13:15:49.966381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.966696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.966702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.967027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.967351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.967358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.967651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.967978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.967985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.968381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.968569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.968575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.968771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.969022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.969029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.969235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.969480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.969486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 00:32:45.104 [2024-04-26 13:15:49.969707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.970060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.970067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.104 qpair failed and we were unable to recover it. 
00:32:45.104 [2024-04-26 13:15:49.970380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.104 [2024-04-26 13:15:49.970684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.970691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.971008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.971344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.971351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.971651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.971819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.971827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.972182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.972379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.972385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.972719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.973076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.973084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.973296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.973452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.973458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.973779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.974085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.974093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 
00:32:45.105 [2024-04-26 13:15:49.974268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.974608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.974615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.974918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.975208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.975215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.975520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.975834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.975846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.976157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.976496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.976503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.976820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.977147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.977154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.977363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.977693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.977700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.978006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.978312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.978319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 
00:32:45.105 [2024-04-26 13:15:49.978642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.978953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.978960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.979292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.979479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.979486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.979831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.980150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.980157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.980457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.980641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.980647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.980967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.981295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.981301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.981629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.981918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.981924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.982245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.982548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.982555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 
00:32:45.105 [2024-04-26 13:15:49.982741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.983037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.983044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.983376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.983664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.983670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.983971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.984154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.984160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.984475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.984762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.984768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.985141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.985310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.985316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.985630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.985823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.985831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.986165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.986363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.986369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 
00:32:45.105 [2024-04-26 13:15:49.986714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.987005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.987012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.105 qpair failed and we were unable to recover it. 00:32:45.105 [2024-04-26 13:15:49.987051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.105 [2024-04-26 13:15:49.987385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.987391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.987708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.987903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.987910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.988244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.988387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.988394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.988677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.988987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.988994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.989347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.989575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.989581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.989759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.990043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.990050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 
00:32:45.106 [2024-04-26 13:15:49.990383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.990722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.990729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.990942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.991026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.991033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.991387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.991696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.991702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.992021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.992348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.992354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.992656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.992947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.992954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.993134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.993473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.993479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.993780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.993958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.993965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 
00:32:45.106 [2024-04-26 13:15:49.994305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.994603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.994609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.994902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.995133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.995140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.995452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.995643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.995651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.995835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.996155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.996162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.996317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.996597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.996604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.996792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.997126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.997133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.997468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.997787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.997794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 
00:32:45.106 [2024-04-26 13:15:49.998166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.998465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.998471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.998777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.999082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.999089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.999260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.999677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:49.999683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:49.999971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.000281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.000288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:50.000579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.000901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.000909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:50.001214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.001548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.001561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:50.001789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.001973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.001982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 
00:32:45.106 [2024-04-26 13:15:50.002789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.003055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.003064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.106 [2024-04-26 13:15:50.003432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.003766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.106 [2024-04-26 13:15:50.003773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.106 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.004130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.004432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.004440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.004784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.004982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.004990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.005017] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.107 [2024-04-26 13:15:50.005048] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.107 [2024-04-26 13:15:50.005056] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.107 [2024-04-26 13:15:50.005064] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.107 [2024-04-26 13:15:50.005070] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.107 [2024-04-26 13:15:50.005238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:45.107 [2024-04-26 13:15:50.005342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.005300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:45.107 [2024-04-26 13:15:50.005699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.005707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 
00:32:45.107 [2024-04-26 13:15:50.005749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:45.107 [2024-04-26 13:15:50.005749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:45.107 [2024-04-26 13:15:50.006020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.006248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.006256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.006497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.006664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.006671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.006901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.007179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.007186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.007535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.007856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.007863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.007997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.008109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.008116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.008323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.008511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.008518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.008681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.008991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.008999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 
00:32:45.107 [2024-04-26 13:15:50.009319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.009649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.009656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.009879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.010173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.010179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.010379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.010589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.010596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.010983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.011287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.011294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.011640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.011871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.011878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.012211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.012562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.012569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.012734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.012803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.012810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 
00:32:45.107 [2024-04-26 13:15:50.012887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.013239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.013245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.013314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.013535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.013547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.013756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.013954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.013961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.014204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.014393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.014399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.014597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.014929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.014936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.107 qpair failed and we were unable to recover it. 00:32:45.107 [2024-04-26 13:15:50.015130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.107 [2024-04-26 13:15:50.015350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.015357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.015535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.015885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.015892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 
00:32:45.108 [2024-04-26 13:15:50.016227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.016559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.016566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.016860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.017122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.017129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.017440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.017774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.017781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.018138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.018339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.018346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.018510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.018825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.018831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.018878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.019191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.019203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.019551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.019859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.019866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 
00:32:45.108 [2024-04-26 13:15:50.020084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.020422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.020429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.020789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.021113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.021120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.021439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.021806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.021813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.022059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.022383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.022390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.022688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.023024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.023031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.023360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.023672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.023679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.023966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.024192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.024199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 
00:32:45.108 [2024-04-26 13:15:50.024280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.024367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.024373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.024668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.024827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.024834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.025040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.025232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.025239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.025571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.025919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.025927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.026086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.026384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.026390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.026728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.026937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.026944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.027297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.027645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.027653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 
00:32:45.108 [2024-04-26 13:15:50.027979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.028301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.028309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.028610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.028789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.028798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.029105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.029409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.029416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.029592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.029958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.029966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.030272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.030460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.030468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.030800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.031122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.031129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.108 qpair failed and we were unable to recover it. 00:32:45.108 [2024-04-26 13:15:50.031516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.108 [2024-04-26 13:15:50.031690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.031695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 
00:32:45.109 [2024-04-26 13:15:50.031990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.032306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.032313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.032486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.032756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.032764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.033078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.033222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.033228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.033456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.033732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.033739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.033917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.034255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.034263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.034580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.034937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.034947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.035264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.035490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.035497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 
00:32:45.109 [2024-04-26 13:15:50.035656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.035919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.035927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.036257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.036602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.036610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.036905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.037204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.037211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.037582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.037782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.037789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.037983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.038163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.038169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.038458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.038769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.038778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.039061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.039404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.039413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 
00:32:45.109 [2024-04-26 13:15:50.039587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.039915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.039923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.040208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.040403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.040411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.040608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.040948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.040955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.041280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.041591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.041600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.041779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.042107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.042115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.042418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.042474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.042481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.042640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.042943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.042950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 
00:32:45.109 [2024-04-26 13:15:50.043173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.043421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.043427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.043771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.044073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.044085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.044390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.044576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.044582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.044850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.045161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.045169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.045502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.045685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.045692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.046017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.046358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.046364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 00:32:45.109 [2024-04-26 13:15:50.046694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.047020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.109 [2024-04-26 13:15:50.047027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.109 qpair failed and we were unable to recover it. 
00:32:45.110 [2024-04-26 13:15:50.047205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.047509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.047516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.047761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.048050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.048058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.048384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.048455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.048461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.048781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.048826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.048832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.049160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.049328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.049337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.049640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.049957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.049965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.050304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.050476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.050484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 
00:32:45.110 [2024-04-26 13:15:50.050792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.051083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.051090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.051406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.051600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.051607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.052028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.052350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.052357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.052696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.053015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.053022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.053311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.053497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.053504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.053817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.054153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.054160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.054466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.054806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.054814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 
00:32:45.110 [2024-04-26 13:15:50.055106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.055180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.055188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.055455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.055754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.055762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.056222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.056535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.056543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.056851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.057220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.057230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.057541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.058062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.058072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.058369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.058438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.058444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.058935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.059302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.059311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 
00:32:45.110 [2024-04-26 13:15:50.059787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.059929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.059937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.060144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.060329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.060336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.060526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.060801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.060809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.060998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.061277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.061285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.061679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.061877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.061886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.062194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.062506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.062513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 00:32:45.110 [2024-04-26 13:15:50.062613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.062891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.062898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.110 qpair failed and we were unable to recover it. 
00:32:45.110 [2024-04-26 13:15:50.063081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.110 [2024-04-26 13:15:50.063241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.063248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.063457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.063978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.063988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.064283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.064465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.064472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.064666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.064991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.064999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.065196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.065534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.065541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.065858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.066116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.066123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.066645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.066971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.066979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 
00:32:45.111 [2024-04-26 13:15:50.067313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.067605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.067612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.067668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.067860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.067868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.068190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.068552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.068562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.068725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.069102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.069109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.069426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.069792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.069799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.069991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.070166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.070173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.070492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.070899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.070909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 
00:32:45.111 [2024-04-26 13:15:50.071296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.071516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.071524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.071919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.072146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.072153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.072340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.072482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.072489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.072817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.073224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.073234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.073551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.074080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.074089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.074368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.074727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.074733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.075015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.075358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.075365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 
00:32:45.111 [2024-04-26 13:15:50.075571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.075782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.075790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.075983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.076206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.076213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.076630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.076942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.076951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.077148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.077439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.077445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.111 qpair failed and we were unable to recover it. 00:32:45.111 [2024-04-26 13:15:50.077786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.111 [2024-04-26 13:15:50.077992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.078000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.078173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.078370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.078376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.078734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.079136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.079146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 
00:32:45.112 [2024-04-26 13:15:50.079306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.079534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.079542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.079860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.080152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.080159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.080476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.080857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.080864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.081070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.081363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.081371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.081721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.082139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.082149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.082326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.082653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.082660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.082935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.083098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.083105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 
00:32:45.112 [2024-04-26 13:15:50.083406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.083715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.083722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.084047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.084388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.084395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.084843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.085125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.085133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.085447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.085749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.085756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.086110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.086331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.086337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.086668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.086986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.086993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.087562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.087734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.087742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 
00:32:45.112 [2024-04-26 13:15:50.087919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.088120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.088127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.088429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.088718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.088725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.089055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.089475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.089485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.089803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.090110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.090117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.090309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.090679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.090686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.090953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.091152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.091159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.091447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.091820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.091827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 
00:32:45.112 [2024-04-26 13:15:50.092154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.092503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.092509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.092807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.093134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.093141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.093184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.093340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.093347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.093758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.093930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.093937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.112 [2024-04-26 13:15:50.094344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.094637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.112 [2024-04-26 13:15:50.094644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.112 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.094926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.095253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.095259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.095451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.095622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.095629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 
00:32:45.113 [2024-04-26 13:15:50.095949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.096284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.096291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.096650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.096798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.096804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.096981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.097221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.097228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.097497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.097761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.097768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.098105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.098276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.098282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.098529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.099003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.099013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.099325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.099647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.099653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 
00:32:45.113 [2024-04-26 13:15:50.099849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.100182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.100189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.100363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.100720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.100727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.101065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.101394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.101400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.101438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.101627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.101634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.101817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.102159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.102166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.102466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.102660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.102666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.102866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.103246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.103252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 
00:32:45.113 [2024-04-26 13:15:50.103563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.103599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.103605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.103890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.104132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.104139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.104342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.104663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.104670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.104872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.105197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.105204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.105378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.105662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.105669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.105987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.106281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.106288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.106482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.106847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.106854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 
00:32:45.113 [2024-04-26 13:15:50.107156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.107492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.107498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.107796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.107873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.107879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.108053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.108240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.108247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.108345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.108637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.108644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.108802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.109077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.109084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.113 qpair failed and we were unable to recover it. 00:32:45.113 [2024-04-26 13:15:50.109426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.113 [2024-04-26 13:15:50.109749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.109756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.110049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.110384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.110391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 
00:32:45.114 [2024-04-26 13:15:50.110695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.110978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.110986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.111223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.111553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.111559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.111867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.112061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.112068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.112400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.112715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.112721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.113113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.113464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.113471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.113641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.113798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.113805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.114128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.114170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.114176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 
00:32:45.114 [2024-04-26 13:15:50.114485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.114830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.114840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.115145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.115478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.115485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.116323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.116648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.116657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.117359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.117539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.117547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.117639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.118441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.118455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.118756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.119060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.119068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.119264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.119623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.119630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 
00:32:45.114 [2024-04-26 13:15:50.119807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.120115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.120122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.120437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.120780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.120786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.121109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.121303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.121310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.121579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.121834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.121851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.122164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.122334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.122341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.122559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.122924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.122931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.123096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.123380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.123386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 
00:32:45.114 [2024-04-26 13:15:50.123565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.123847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.123854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.124057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.124359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.124366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.124525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.124909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.124916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.125269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.125553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.125559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.125864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.126063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.126070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.126427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.126741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.114 [2024-04-26 13:15:50.126748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.114 qpair failed and we were unable to recover it. 00:32:45.114 [2024-04-26 13:15:50.127062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.127380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.127387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 
00:32:45.115 [2024-04-26 13:15:50.127703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.127902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.127909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.128292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.128614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.128621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.128783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.128882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.128888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.129092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.129405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.129411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.129602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.129973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.129980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.130276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.130604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.130613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.130770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.130996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.131004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 
00:32:45.115 [2024-04-26 13:15:50.131302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.131467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.131474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.131754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.132051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.132058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.132378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.132664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.132671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.132932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.133223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.133230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.133422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.133784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.133791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.133993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.134306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.134313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.134484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.134811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.134818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 
00:32:45.115 [2024-04-26 13:15:50.135152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.135473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.135480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.135768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.135987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.135996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.136275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.136632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.136639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.136965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.137141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.137147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.137457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.137639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.137646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.137935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.138258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.138264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.138557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.138830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.138850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 
00:32:45.115 [2024-04-26 13:15:50.139139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.139305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.139311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.139454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.139792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.139798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.140028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.140342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.140348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.140656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.140969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.140976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.141324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.141487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.141495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.141691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.142012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.142019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.115 qpair failed and we were unable to recover it. 00:32:45.115 [2024-04-26 13:15:50.142324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.142649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.115 [2024-04-26 13:15:50.142656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 
00:32:45.116 [2024-04-26 13:15:50.142833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.143152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.143159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.143369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.143716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.143722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.144040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.144365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.144372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.144670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.145000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.145006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.145329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.145652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.145658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.145973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.146279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.146286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.146408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.146702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.146709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 
00:32:45.116 [2024-04-26 13:15:50.147017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.147347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.147355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.147654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.147963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.116 [2024-04-26 13:15:50.147970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.116 qpair failed and we were unable to recover it. 00:32:45.116 [2024-04-26 13:15:50.148171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.148532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.148539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.148882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.149227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.149234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.149523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.149857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.149864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.150173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.150338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.150345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.150521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.150802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.150808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 
00:32:45.385 [2024-04-26 13:15:50.151096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.151285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.151297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.151603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.151776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.151782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.151825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.152154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.152162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.152477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.152783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.152790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.153133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.153455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.153461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.153644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.153920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.385 [2024-04-26 13:15:50.153927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.385 qpair failed and we were unable to recover it. 00:32:45.385 [2024-04-26 13:15:50.154107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.154265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.154272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 
00:32:45.386 [2024-04-26 13:15:50.154456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.154786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.154793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.154968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.155263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.155270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.155566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.155804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.155810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.155962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.156387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.156393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.156546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.156588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.156594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.156690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.156983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.156990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.157332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.157657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.157663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 
00:32:45.386 [2024-04-26 13:15:50.157984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.158310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.158317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.158630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.158971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.158978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.159305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.159348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.159355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.159663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.159883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.159890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.160071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.160253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.160260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.160460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.160658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.160664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.161022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.161190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.161197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 
00:32:45.386 [2024-04-26 13:15:50.161351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.161641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.161648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.161842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.162043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.162050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.162216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.162403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.162409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.162669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.162715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.162721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.163027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.163347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.163353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.163547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.163898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.163904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.164209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.164404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.164410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 
00:32:45.386 [2024-04-26 13:15:50.164612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.164895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.164902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.165278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.165576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.165582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.165895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.166090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.166097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.166429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.166624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.166631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.166946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.167285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.167293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.167603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.167947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.386 [2024-04-26 13:15:50.167954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.386 qpair failed and we were unable to recover it. 00:32:45.386 [2024-04-26 13:15:50.168256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.168541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.168547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 
00:32:45.387 [2024-04-26 13:15:50.168766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.168984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.168991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.169305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.169606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.169613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.169923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.169961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.169968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.170132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.170351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.170358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.170669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.170970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.170977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.171152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.171480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.171486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.171696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.172017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.172024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 
00:32:45.387 [2024-04-26 13:15:50.172205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.172492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.172498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.172825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.173126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.173134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.173173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.173558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.173565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.173878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.174048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.174055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.174327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.174630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.174636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.174955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.175286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.175292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.175596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.175929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.175936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 
00:32:45.387 [2024-04-26 13:15:50.176268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.176559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.176565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.176868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.177057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.177064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.177345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.177687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.177694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.177991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.178312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.178318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.178515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.178864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.178870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.179182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.179362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.179369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.179532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.179726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.179732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 
00:32:45.387 [2024-04-26 13:15:50.179918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.180104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.180111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.180504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.180831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.180842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.181156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.181351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.181358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.181665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.181990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.181996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.387 qpair failed and we were unable to recover it. 00:32:45.387 [2024-04-26 13:15:50.182307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.387 [2024-04-26 13:15:50.182619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.182625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.182924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.183133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.183139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.183314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.183642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.183649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 
00:32:45.388 [2024-04-26 13:15:50.183822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.184153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.184159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.184455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.184497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.184504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.184725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.184846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.184854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.185172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.185495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.185501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.185669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.185946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.185959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.186279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.186318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.186324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.186612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.186843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.186849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 
00:32:45.388 [2024-04-26 13:15:50.187219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.187550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.187557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.187822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.188021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.188028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.188341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.188633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.188640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.188807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.189131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.189138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.189334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.189525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.189531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.189834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.190180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.190187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.190518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.190827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.190834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 
00:32:45.388 [2024-04-26 13:15:50.191156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.191505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.191513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.191831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.192123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.192130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.192397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.192715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.192721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.192912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.192951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.192957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.193342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.193670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.193676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.193981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.194321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.194327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.194636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.194977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.194984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 
00:32:45.388 [2024-04-26 13:15:50.195290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.195572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.195579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.195877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.196220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.196227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.196406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.196590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.196596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.196898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.197134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.388 [2024-04-26 13:15:50.197142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.388 qpair failed and we were unable to recover it. 00:32:45.388 [2024-04-26 13:15:50.197449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.197789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.197795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.198109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.198404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.198410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.198691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.199014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.199020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 
00:32:45.389 [2024-04-26 13:15:50.199440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.199791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.199797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.200124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.200315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.200321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.200456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.200687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.200694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.200982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.201183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.201190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.201518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.201822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.201829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.202133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.202325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.202332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.202673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.202982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.202988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 
00:32:45.389 [2024-04-26 13:15:50.203311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.203665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.203671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.203992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.204184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.204191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.204494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.204836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.204850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.205142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.205475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.205481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.205655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.205888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.205895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.206190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.206500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.206506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.206677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.207018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.207025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 
00:32:45.389 [2024-04-26 13:15:50.207332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.207662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.207669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.207857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.208126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.208133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.208315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.208664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.208671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.209011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.209333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.209339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.209738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.210010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.210017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.210367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.210690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.210698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.210890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.211077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.211084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 
00:32:45.389 [2024-04-26 13:15:50.211281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.211611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.211617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.211927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.212249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.212255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.212512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.212858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.212867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.213251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.213422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.213429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.389 [2024-04-26 13:15:50.213765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.213947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.389 [2024-04-26 13:15:50.213954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.389 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.214256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.214604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.214610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.214766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.215040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.215054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 
00:32:45.390 [2024-04-26 13:15:50.215227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.215517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.215523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.215876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.216197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.216203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.216515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.216820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.216826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.217128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.217292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.217298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.217581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.217679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.217685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.217726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.217966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.217975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.218141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.218430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.218437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 
00:32:45.390 [2024-04-26 13:15:50.218753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.218991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.218998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.219176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.219467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.219474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.219784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.219962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.219969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.220314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.220655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.220661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.220997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.221329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.221335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.221654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.221904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.221911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.222095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.222421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.222428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 
00:32:45.390 [2024-04-26 13:15:50.222766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.223089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.223096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.223251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.223417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.223425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.223660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.223945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.223952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.224278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.224440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.224447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.390 qpair failed and we were unable to recover it. 00:32:45.390 [2024-04-26 13:15:50.224617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.224948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.390 [2024-04-26 13:15:50.224955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.225141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.225496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.225503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.225807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.226151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.226158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 
00:32:45.391 [2024-04-26 13:15:50.226465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.226682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.226688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.227029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.227204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.227210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.227527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.227896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.227902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.228109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.228462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.228468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.228768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.229086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.229094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.229392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.229599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.229606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.229878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.230256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.230265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 
00:32:45.391 [2024-04-26 13:15:50.230459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.230779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.230786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.231106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.231429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.231436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.231639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.231927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.231933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.232272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.232580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.232587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.232913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.233241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.233247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.233561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.233890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.233898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.234203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.234507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.234513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 
00:32:45.391 [2024-04-26 13:15:50.234702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.235097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.235103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.235458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.235763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.235775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.236098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.236398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.236405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.236730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.236952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.236959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.237183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.237411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.237418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.237745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.238066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.238073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.391 [2024-04-26 13:15:50.238465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.238773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.238779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 
00:32:45.391 [2024-04-26 13:15:50.238965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.239274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.391 [2024-04-26 13:15:50.239281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.391 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.239629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.239829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.239836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.240174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.240246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.240253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.240436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.240773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.240780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.241114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.241273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.241280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.241517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.241735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.241741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.242081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.242394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.242400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 
00:32:45.392 [2024-04-26 13:15:50.242704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.243047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.243054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.243365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.243685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.243691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.243865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.244175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.244182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.244516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.244722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.244728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.244908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.245068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.245075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.245387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.245715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.245721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 00:32:45.392 [2024-04-26 13:15:50.246018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.246350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.392 [2024-04-26 13:15:50.246357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.392 qpair failed and we were unable to recover it. 
00:32:45.392 [2024-04-26 13:15:50.246677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.247032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.247038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.247218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.247389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.247396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.247588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.247917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.247924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.248233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.248542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.248548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.248867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.249157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.249163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.249378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.249725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.249731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.249964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.250289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.250296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 
00:32:45.393 [2024-04-26 13:15:50.250524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.250852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.250859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.251246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.251562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.251569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.251861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.252078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.252085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.252372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.252554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.252561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.252740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.253044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.253051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.253226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.253507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.253514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.253848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.254173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.254180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 
00:32:45.393 [2024-04-26 13:15:50.254376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.254649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.254656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.254988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.255186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.255192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.255505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.255791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.255798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.256023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.256309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.256315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.256550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.256710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.256716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.256945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.257289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.257295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.257592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.257912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.257918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 
00:32:45.393 [2024-04-26 13:15:50.258290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.258616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.258622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.259004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.259186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.259192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.259402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.259597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.259603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.393 qpair failed and we were unable to recover it. 00:32:45.393 [2024-04-26 13:15:50.259916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.393 [2024-04-26 13:15:50.260263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.260269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.260427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.260733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.260739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.261114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.261303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.261310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.261686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.261873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.261880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 
00:32:45.394 [2024-04-26 13:15:50.262270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.262588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.262595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.262936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.263152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.263158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.263470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.263655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.263661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.263841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.264017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.264023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.264199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.264484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.264490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.264831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.265149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.265156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.265460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.265809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.265815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 
00:32:45.394 [2024-04-26 13:15:50.266120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.266162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.266168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.266351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.266535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.266541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.266853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.267024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.267030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.267332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.267670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.267677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.267999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.268306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.268313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.268500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.268832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.268842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 00:32:45.394 [2024-04-26 13:15:50.269013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.269236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.394 [2024-04-26 13:15:50.269243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.394 qpair failed and we were unable to recover it. 
00:32:45.394 [2024-04-26 13:15:50.269617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:45.394 [2024-04-26 13:15:50.269795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:45.394 [2024-04-26 13:15:50.269803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 
00:32:45.394 qpair failed and we were unable to recover it. 
[... the same four-line failure sequence (two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 13:15:50.269617 through 13:15:50.355229 ...] 
00:32:45.400 [2024-04-26 13:15:50.354937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:45.400 [2024-04-26 13:15:50.355223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:45.400 [2024-04-26 13:15:50.355229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 
00:32:45.400 qpair failed and we were unable to recover it. 
00:32:45.400 [2024-04-26 13:15:50.355563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.355864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.355871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.400 qpair failed and we were unable to recover it. 00:32:45.400 [2024-04-26 13:15:50.356197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.356522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.356528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.400 qpair failed and we were unable to recover it. 00:32:45.400 [2024-04-26 13:15:50.356692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.356981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.356989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.400 qpair failed and we were unable to recover it. 00:32:45.400 [2024-04-26 13:15:50.357143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.357326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.400 [2024-04-26 13:15:50.357332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.400 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.357648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.357962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.357969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.358330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.358678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.358685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.358989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.359304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.359311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 
00:32:45.401 [2024-04-26 13:15:50.359608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.359802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.359808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.360044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.360436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.360443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.360642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.360857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.360870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.361057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.361420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.361426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.361771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.362112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.362118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.362284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.362447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.362453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.362785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.363132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.363138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 
00:32:45.401 [2024-04-26 13:15:50.363316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.363495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.363501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.363770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.364092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.364099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.364367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.364560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.364567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.364764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.365069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.365076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.365306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.365480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.365487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.365796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.366090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.366098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.366416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.366604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.366611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 
00:32:45.401 [2024-04-26 13:15:50.366790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.367093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.367100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.367451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.367534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.367539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.367744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.367915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.367923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.368261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.368564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.368570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.368881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.369261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.401 [2024-04-26 13:15:50.369267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.401 qpair failed and we were unable to recover it. 00:32:45.401 [2024-04-26 13:15:50.369324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.369530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.369537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.369822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.370129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.370137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 
00:32:45.402 [2024-04-26 13:15:50.370480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.370671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.370677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.370816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.371036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.371043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.371370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.371673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.371680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.371872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.372190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.372197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.372396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.372614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.372621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.372976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.373318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.373324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.373650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.373815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.373822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 
00:32:45.402 [2024-04-26 13:15:50.374021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.374067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.374073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.374405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.374746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.374753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.375064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.375355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.375362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.375529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.375569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.375575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.375868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.376212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.376218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.376515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.376808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.376814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.377124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.377449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.377456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 
00:32:45.402 [2024-04-26 13:15:50.377753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.377960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.377967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.378009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.378204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.378211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.378380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.378583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.378589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.378770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.379101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.379108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.379298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.379627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.379635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.379835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.380195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.380203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.380389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.380714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.380720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 
00:32:45.402 [2024-04-26 13:15:50.380888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.381168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.381175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.381496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.381819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.381825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.382134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.382340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.382347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.382509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.382748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.382754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.383082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.383250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.383256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.402 qpair failed and we were unable to recover it. 00:32:45.402 [2024-04-26 13:15:50.383489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.402 [2024-04-26 13:15:50.383784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.383790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.384150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.384497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.384504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 
00:32:45.403 [2024-04-26 13:15:50.384897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.385181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.385187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.385387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.385757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.385764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.386066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.386350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.386357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.386651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.386847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.386854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.387189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.387508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.387514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.387850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.388181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.388187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.388385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.388696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.388702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 
00:32:45.403 [2024-04-26 13:15:50.389044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.389368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.389374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.389532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.389810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.389817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.390137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.390179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.390187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.390343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.390533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.390540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.390742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.391048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.391055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.391382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.391564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.391570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.391897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.392193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.392199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 
00:32:45.403 [2024-04-26 13:15:50.392480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.392809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.392815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.393159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.393470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.393477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.393637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.393818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.393824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.394005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.394304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.394310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.394465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.394636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.394643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.394818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.395000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.395008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.395289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.395632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.395640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 
00:32:45.403 [2024-04-26 13:15:50.395984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.396214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.396221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.396539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.396704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.396710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.397085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.397394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.397401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.397717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.398054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.398061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.398238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.398524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.398531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.398834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.399200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.403 [2024-04-26 13:15:50.399206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.403 qpair failed and we were unable to recover it. 00:32:45.403 [2024-04-26 13:15:50.399434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.399756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.399763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 
00:32:45.404 [2024-04-26 13:15:50.400062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.400378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.400385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.400531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.400815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.400823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.400996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.401266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.401273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.401543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.401848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.401854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.402071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.402359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.402367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.402525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.402830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.402840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.403012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.403303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.403310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 
00:32:45.404 [2024-04-26 13:15:50.403477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.403739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.403745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.404080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.404419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.404426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.404739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.404899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.404906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.405060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.405366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.405372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.405670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.406010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.406018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.406229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.406533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.406539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.406834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.407135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.407142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 
00:32:45.404 [2024-04-26 13:15:50.407502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.407817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.407824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.408139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.408446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.408452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.408641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.408866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.408874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.409060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.409339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.409346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.409657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.409882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.409889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.410201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.410505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.410512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.410828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.411156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.411163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 
00:32:45.404 [2024-04-26 13:15:50.411472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.411824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.411831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.412174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.412493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.412500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.412857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.413156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.413163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.413367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.413588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.413594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.413883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.414217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.414223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.414262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.414619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.414626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.404 qpair failed and we were unable to recover it. 00:32:45.404 [2024-04-26 13:15:50.414946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.404 [2024-04-26 13:15:50.415111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.415117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 
00:32:45.405 [2024-04-26 13:15:50.415400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.415582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.415588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.415882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.416185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.416191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.416511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.416813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.416820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.417009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.417285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.417292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.417339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.417656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.417662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.417853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.418186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.418192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.418507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.418651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.418657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 
00:32:45.405 [2024-04-26 13:15:50.418892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.419085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.419091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.419389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.419692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.419698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.419884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.420192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.420199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.420392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.420697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.420704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.421013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.421196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.421203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.421390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.421682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.421689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.421991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.422308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.422314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 
00:32:45.405 [2024-04-26 13:15:50.422632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.422861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.422867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.423179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.423346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.423352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.423547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.423701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.423707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.424022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.424357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.424364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.424730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.425051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.425058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.425216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.425368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.425375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.425547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.425767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.425773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 
00:32:45.405 [2024-04-26 13:15:50.426088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.426285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.426291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.426594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.426921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.426929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.405 [2024-04-26 13:15:50.427230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.427597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.405 [2024-04-26 13:15:50.427604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.405 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.427967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.428168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.428175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.428484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.428794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.428801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.428966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.429237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.429245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.429434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.429740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.429747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 
00:32:45.406 [2024-04-26 13:15:50.429933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.430215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.430222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.430401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.430708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.430714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.431015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.431114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.431121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.431275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.431552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.431559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.431873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.432189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.432197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.432486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.432526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.432532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.432574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.432784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.432790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 
00:32:45.406 [2024-04-26 13:15:50.433076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.433421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.433427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.433653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.433986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.433992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.434347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.434671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.434678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.434862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.435185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.435193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.435526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.435847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.435855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.406 [2024-04-26 13:15:50.436183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.436356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.406 [2024-04-26 13:15:50.436363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.406 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.436543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.436834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.436847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 
00:32:45.682 [2024-04-26 13:15:50.437134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.437455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.437461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.437780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.437947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.437954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.437994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.438297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.438303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.438465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.438630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.438637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.438810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.439151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.439158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.439454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.439739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.439746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.440034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.440329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.440336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 
00:32:45.682 [2024-04-26 13:15:50.440648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.440978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.440985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.441293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.441606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.441612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.441918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.442115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.442121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.442441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.442634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.442640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.442680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.442994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.443001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.443324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.443644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.443650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.443944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.444127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.444133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 
00:32:45.682 [2024-04-26 13:15:50.444337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.444640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.444648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.444873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.445047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.445053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.445361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.445702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.445708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.682 qpair failed and we were unable to recover it. 00:32:45.682 [2024-04-26 13:15:50.446014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.682 [2024-04-26 13:15:50.446203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.446210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.446498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.446800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.446807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.447192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.447229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.447236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.447561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.447898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.447905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 
00:32:45.683 [2024-04-26 13:15:50.448283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.448635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.448641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.448966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.449308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.449316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.449660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.449971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.449978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.450272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.450469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.450476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.450795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.450964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.450971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.451273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.451454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.451461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.451747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.451959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.451966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 
00:32:45.683 [2024-04-26 13:15:50.452306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.452643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.452649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.452945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.453264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.453271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.453306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.453467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.453474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.453836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.454136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.454143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.454309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.454506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.454513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.454704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.454952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.454959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.455271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.455475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.455481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 
00:32:45.683 [2024-04-26 13:15:50.455811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.456181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.456187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.456371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.456747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.456754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.457075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.457368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.457374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.457556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.457869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.457876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.458249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.458560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.458573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.458899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.458941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.458947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.459201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.459396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.459402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 
00:32:45.683 [2024-04-26 13:15:50.459615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.459829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.459836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.460055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.460416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.460423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.683 qpair failed and we were unable to recover it. 00:32:45.683 [2024-04-26 13:15:50.460606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.460957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.683 [2024-04-26 13:15:50.460964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.461313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.461608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.461614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.461789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.462085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.462092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.462393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.462685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.462691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.463048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.463386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.463393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 
00:32:45.684 [2024-04-26 13:15:50.463600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.463780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.463786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.464102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.464292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.464299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.464621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.464918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.464925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.465299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.465623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.465631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.465973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.466154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.466161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.466379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.466676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.466683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.466895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.467211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.467217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 
00:32:45.684 [2024-04-26 13:15:50.467504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.467784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.467790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.468154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.468481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.468488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.468810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.469203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.469210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.469509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.469842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.469849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.470029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.470074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.470079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.470413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.470739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.470747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.471045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.471389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.471397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 
00:32:45.684 [2024-04-26 13:15:50.471667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.471982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.471989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.472325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.472631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.472638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.472709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.472997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.473004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.473232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.473419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.473427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.473468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.473668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.473676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.473995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.474198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.474205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.474518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.474857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.474864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 
00:32:45.684 [2024-04-26 13:15:50.475224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.475526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.684 [2024-04-26 13:15:50.475533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.684 qpair failed and we were unable to recover it. 00:32:45.684 [2024-04-26 13:15:50.475850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.476027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.476033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.685 qpair failed and we were unable to recover it. 00:32:45.685 [2024-04-26 13:15:50.476372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.476661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.476669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.685 qpair failed and we were unable to recover it. 00:32:45.685 [2024-04-26 13:15:50.477073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.477405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.477412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.685 qpair failed and we were unable to recover it. 00:32:45.685 [2024-04-26 13:15:50.477638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.477929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.477936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.685 qpair failed and we were unable to recover it. 00:32:45.685 [2024-04-26 13:15:50.478235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.478424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.478430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.685 qpair failed and we were unable to recover it. 00:32:45.685 [2024-04-26 13:15:50.478765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.479064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.685 [2024-04-26 13:15:50.479071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.685 qpair failed and we were unable to recover it. 
00:32:45.690 [2024-04-26 13:15:50.556278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.690 [2024-04-26 13:15:50.556619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.690 [2024-04-26 13:15:50.556626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.690 qpair failed and we were unable to recover it. 00:32:45.690 [2024-04-26 13:15:50.556939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.690 [2024-04-26 13:15:50.557091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.690 [2024-04-26 13:15:50.557098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.690 qpair failed and we were unable to recover it. 00:32:45.690 [2024-04-26 13:15:50.557292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.690 [2024-04-26 13:15:50.557584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.557590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.557916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.558105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.558112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.558456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.558632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.558638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.558964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.559160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.559167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.559346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.559699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.559708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 
00:32:45.691 [2024-04-26 13:15:50.559904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.560129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.560136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.560356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.560807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.560818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.560987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.561400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.561407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.561616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.561980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.561987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.562193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.562393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.562399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.562636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.562938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.562945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.563139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.563562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.563569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 
00:32:45.691 [2024-04-26 13:15:50.563878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.564082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.564089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.564408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.564581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.564587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.564927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.564967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.564975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.565313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.565634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.565641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.566005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.566341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.566347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.566679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.567003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.567010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.567057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.567385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.567392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 
00:32:45.691 [2024-04-26 13:15:50.567749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.568084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.568091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.568407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.568702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.568709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.568889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.569202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.569208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.569399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.569693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.569700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.691 qpair failed and we were unable to recover it. 00:32:45.691 [2024-04-26 13:15:50.569905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.570129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.691 [2024-04-26 13:15:50.570136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.570514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.570674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.570684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.570852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.571184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.571190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 
00:32:45.692 [2024-04-26 13:15:50.571402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.571621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.571628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.571920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.572283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.572290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.572599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.572939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.572946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.573168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.573453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.573460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.573787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.574099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.574106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.574405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.574522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.574528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.574860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.575154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.575161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 
00:32:45.692 [2024-04-26 13:15:50.575518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.575851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.575859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.576184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.576559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.576569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.576905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.577230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.577237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.577544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.577741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.577748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.578106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.578152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.578158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.578470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.578767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.578774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.579133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.579537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.579544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 
00:32:45.692 [2024-04-26 13:15:50.579869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.580064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.692 [2024-04-26 13:15:50.580071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.692 qpair failed and we were unable to recover it. 00:32:45.692 [2024-04-26 13:15:50.580236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.580431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.580438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.580539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.580845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.580852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.581153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.581338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.581345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.581552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.581853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.581860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.582040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.582324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.582330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.582681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.582853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.582860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 
00:32:45.693 [2024-04-26 13:15:50.583161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.583514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.583521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.583740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.583779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.583786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.584061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.584420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.584427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.584617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.584904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.584911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.585233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.585420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.585427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.585708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.586087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.586095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.586421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.586795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.586802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 
00:32:45.693 [2024-04-26 13:15:50.587112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.587286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.587292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.587488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.587792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.587799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.587997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.588317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.588324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.588367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.588678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.588685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.589022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.589335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.589341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.589692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.590100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.590108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.590342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.590510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.590517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 
00:32:45.693 [2024-04-26 13:15:50.590907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.591292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.591299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.591615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.591785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.591791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.591987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.592349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.592355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.592669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.592877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.592883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.593203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.593556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.693 [2024-04-26 13:15:50.593563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.693 qpair failed and we were unable to recover it. 00:32:45.693 [2024-04-26 13:15:50.593764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.593990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.593998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.594349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.594547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.594554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 
00:32:45.694 [2024-04-26 13:15:50.594633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.594905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.594912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.595239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.595404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.595411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.595640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.595923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.595930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.596279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.596463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.596469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.596697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.596775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.596781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.597099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.597293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.597301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.597514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.597798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.597805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 
00:32:45.694 [2024-04-26 13:15:50.598148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.598464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.598470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.598810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.599104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.599111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.599431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.599781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.599788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.599827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.600098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.600106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.600376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.600716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.600723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.601034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.601338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.601344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.601502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.601662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.601669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 
00:32:45.694 [2024-04-26 13:15:50.602004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.602319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.602325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.602525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.602771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.602778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.602855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.603314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.603320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.603729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.603768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.603774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.604079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.604405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.604412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.604621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.604858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.604865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 00:32:45.694 [2024-04-26 13:15:50.604911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.605116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.605123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.694 qpair failed and we were unable to recover it. 
00:32:45.694 [2024-04-26 13:15:50.605514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.694 [2024-04-26 13:15:50.605709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.605715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.605757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.605800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.605806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.606026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.606214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.606221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.606411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.606619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.606625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.606814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.607158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.607166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.607342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.607636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.607642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.607978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.608287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.608293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 
00:32:45.695 [2024-04-26 13:15:50.608510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.608868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.608874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.609194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.609516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.609523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.609699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.610009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.610017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.610359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.610678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.610684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.610868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.611043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.611050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.611257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.611310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.611316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.611473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.611790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.611797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 
00:32:45.695 [2024-04-26 13:15:50.612128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.612472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.612479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.612833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.613197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.613204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.613381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.613671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.613677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.613905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.614251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.614258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.614585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 13:15:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:45.695 [2024-04-26 13:15:50.614897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.614905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.614949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 13:15:50 -- common/autotest_common.sh@850 -- # return 0 00:32:45.695 [2024-04-26 13:15:50.615239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.615245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 13:15:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:45.695 [2024-04-26 13:15:50.615408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 13:15:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:45.695 [2024-04-26 13:15:50.615716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.615723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 
00:32:45.695 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:32:45.695 [2024-04-26 13:15:50.615956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.616307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.616315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.616508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.616684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.616690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.616833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.617063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.617070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.617397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.617699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.617705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.695 qpair failed and we were unable to recover it. 00:32:45.695 [2024-04-26 13:15:50.618021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.695 [2024-04-26 13:15:50.618368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.618375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.618771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.618970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.618976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.619355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.619754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.619762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 
00:32:45.696 [2024-04-26 13:15:50.619915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.620097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.620104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.620272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.620636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.620643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.620976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.621299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.621305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.621635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.621858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.621865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.622165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.622326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.622334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.622651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.622849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.622856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.623031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.623254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.623267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 
00:32:45.696 [2024-04-26 13:15:50.623610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.623783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.623792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.624094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.624261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.624268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.624444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.624608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.624616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.624793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.625062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.625069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.625417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.625582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.625589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.625910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.626198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.626205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.626552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.626748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.626755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 
00:32:45.696 [2024-04-26 13:15:50.627116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.627294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.627301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.627675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.627867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.627875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.628098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.628421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.628428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.628744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.629064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.629074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.629260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.629558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.629565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.629787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.630005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.630011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.630304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.630489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.630495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 
00:32:45.696 [2024-04-26 13:15:50.630786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.630889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.630897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.696 qpair failed and we were unable to recover it. 00:32:45.696 [2024-04-26 13:15:50.631213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.631530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.696 [2024-04-26 13:15:50.631536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.631858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.632038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.632045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.632376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.632716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.632723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.632883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.633178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.633184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.633401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.633574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.633581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.633616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.633794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.633803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 
00:32:45.697 [2024-04-26 13:15:50.633956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.634124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.634131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.634429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.634620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.634627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.634965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.635302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.635309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.635485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.635755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.635762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.636070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.636391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.636398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.636716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.637064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.637071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.637388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.637718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.637725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 
00:32:45.697 [2024-04-26 13:15:50.638063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.638414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.638421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.638700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.639017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.639024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.639344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.639636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.639643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.639958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.640233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.640240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.640422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.640542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.640548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.640736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.641066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.641073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.641269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.641640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.641647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 
00:32:45.697 [2024-04-26 13:15:50.641959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.642263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.642270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.697 [2024-04-26 13:15:50.642544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.642854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.697 [2024-04-26 13:15:50.642861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.697 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.643153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.643314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.643322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.643361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.643721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.643727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.644106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.644424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.644432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.644733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.644896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.644903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.645195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.645500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.645512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 
00:32:45.698 [2024-04-26 13:15:50.645830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.646151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.646157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.646486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.646651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.646658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.646979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.647286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.647293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.647589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.647890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.647897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.648223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.648508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.648514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.648653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.648861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.648869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.649233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.649558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.649564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 
00:32:45.698 [2024-04-26 13:15:50.649779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.650131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.650138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.650439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.650766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.650773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.651176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.651479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.651486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.651685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.651863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.651870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.652061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.652415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.652422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.652730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 13:15:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.698 [2024-04-26 13:15:50.653072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.653080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.653420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 13:15:50 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:45.698 [2024-04-26 13:15:50.653597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.653604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 
00:32:45.698 13:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.698 [2024-04-26 13:15:50.653768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:32:45.698 [2024-04-26 13:15:50.654002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.654010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.698 [2024-04-26 13:15:50.654329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.654639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.698 [2024-04-26 13:15:50.654646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.698 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.654980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.655155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.655162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.655500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.655810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.655816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.656211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.656566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.656574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.656892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.657227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.657234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.657580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.657760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.657766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 
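Note: the interleaved trace "13:15:50 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0" shows the test script creating the backing bdev for the target while the initiator-side connect() retries continue. In SPDK's autotest harness rpc_cmd forwards the command to scripts/rpc.py against the running nvmf_tgt, so the step is roughly equivalent to (a sketch, assuming a checked-out spdk tree and the default RPC socket):
  # create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0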
00:32:45.699 [2024-04-26 13:15:50.658100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.658420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.658426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.658768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.659093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.659100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.659301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.659600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.659608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.659916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.660280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.660286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.660625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.660856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.660862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.661181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.661500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.661507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.661782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.662074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.662081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 
00:32:45.699 [2024-04-26 13:15:50.662375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.662712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.662719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.663021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.663199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.663206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.663531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.663566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.663573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.663770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.663946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.663953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.664134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.664524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.664531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.664831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.664998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.665005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.665330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.665528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.665535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 
00:32:45.699 [2024-04-26 13:15:50.665843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.666152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.666158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.666342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.666630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.666637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.666823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.667040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.667048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.667255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.667479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.667486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.667825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.668123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.668130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.668431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.668761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.668768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.699 qpair failed and we were unable to recover it. 00:32:45.699 [2024-04-26 13:15:50.669103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.669418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.699 [2024-04-26 13:15:50.669425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 
00:32:45.700 [2024-04-26 13:15:50.669615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.669772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.669779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.670087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.670270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.670278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.670603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 Malloc0 00:32:45.700 [2024-04-26 13:15:50.670951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.670965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.671291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.671624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.671631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 13:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.700 [2024-04-26 13:15:50.671933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 13:15:50 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:45.700 [2024-04-26 13:15:50.672080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.672087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 13:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.700 [2024-04-26 13:15:50.672406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:32:45.700 [2024-04-26 13:15:50.672576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.672585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.672770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.673009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.673017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 
00:32:45.700 [2024-04-26 13:15:50.673066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.673361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.673367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.673674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.673840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.673847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.674202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.674547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.674553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.674884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.675072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.675078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.675425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.675722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.675728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.676034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.676217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.676224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.676647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.676978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.676984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 
00:32:45.700 [2024-04-26 13:15:50.677176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.677546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.677552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.677875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.678074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.678083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.678362] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.700 [2024-04-26 13:15:50.678408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.678735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.678743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.678899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.679248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.679255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.679544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.679817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.679824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.680146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.680498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.680505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.680595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.680795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.680802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 
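Note: the "*** TCP Transport Init ***" notice from tcp.c is nvmf_tgt acting on the nvmf_create_transport RPC issued by host/target_disconnect.sh@21 a few lines earlier; at this point the target has a TCP transport but no subsystem or listener yet, which is consistent with the connect() attempts above still being refused. Outside the harness the same step would look roughly like (a sketch, assuming the default RPC socket; the -o flag is passed through to rpc.py unchanged, as the script does):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o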
00:32:45.700 [2024-04-26 13:15:50.680852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.680997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.681005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.681231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.681428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.681436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.681752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.681914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.681921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.682270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.682458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.682464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.682782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.683095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.683104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.700 [2024-04-26 13:15:50.683288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.683594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.700 [2024-04-26 13:15:50.683600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.700 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.683773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.684079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.684085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 
00:32:45.701 [2024-04-26 13:15:50.684389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.684733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.684739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.685043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.685190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.685197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.685474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.685812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.685818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.686129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.686446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.686453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.686773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.687090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.687097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 13:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.701 [2024-04-26 13:15:50.687394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 13:15:50 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:45.701 [2024-04-26 13:15:50.687711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.687717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.687916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 13:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.701 [2024-04-26 13:15:50.688122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.688129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 
00:32:45.701 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:32:45.701 [2024-04-26 13:15:50.688348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.688544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.688550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.688768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.689066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.689073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.689269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.689643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.689650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.689970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.690147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.690153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.690435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.690766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.690773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.690998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.691276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.691283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.691599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.691901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.691908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 
00:32:45.701 [2024-04-26 13:15:50.692130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.692468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.692474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.692775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.693097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.693103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.693298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.693667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.693674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.693869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.694117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.694124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.694442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.694482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.694487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.694768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.695090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.695097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.695402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.695748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.695754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 
00:32:45.701 [2024-04-26 13:15:50.696096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.696291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.696297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.696633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.696800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.696806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.701 [2024-04-26 13:15:50.697099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.697402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.701 [2024-04-26 13:15:50.697408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.701 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.697714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.697998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.698004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.698332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.698644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.698650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.698944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.699252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.699258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 13:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.702 [2024-04-26 13:15:50.699659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 13:15:50 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:45.702 13:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.702 [2024-04-26 13:15:50.700011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.700018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 
00:32:45.702 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:32:45.702 [2024-04-26 13:15:50.700356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.700659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.700666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.700855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.701058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.701064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.701115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.701296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.701302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.701487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.701688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.701696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.701997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.702162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.702168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.702414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.702720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.702727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.702918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.703306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.703312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 
00:32:45.702 [2024-04-26 13:15:50.703615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.703776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.703782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.704097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.704417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.704423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.704730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.705006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.705013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.705198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.705530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.705536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.705843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.706174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.706180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.706223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.706585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.706592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.706756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.706934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.706940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 
00:32:45.702 [2024-04-26 13:15:50.707221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.707558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.707565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.707759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.708098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.708105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.708278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.708614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.708621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.708803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.708963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.708970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.709153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.709526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.709532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.709843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.710102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.710108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.702 qpair failed and we were unable to recover it. 00:32:45.702 [2024-04-26 13:15:50.710392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.710687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.702 [2024-04-26 13:15:50.710693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 
00:32:45.703 [2024-04-26 13:15:50.710921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.711116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.711123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.711441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 13:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.703 [2024-04-26 13:15:50.711826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.711834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 13:15:50 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 13:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.703 [2024-04-26 13:15:50.712150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.712334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.712341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:32:45.703 [2024-04-26 13:15:50.712717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.712846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.712853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.713026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.713332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.713339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.713554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.713732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.713738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 
00:32:45.703 [2024-04-26 13:15:50.714065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.714256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.714265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.714463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.714629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.714635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.714843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.715030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.715038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.715369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.715688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.715694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.715993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.716333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.716339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.716643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.716957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.716963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.717263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.717346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.717352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 
00:32:45.703 [2024-04-26 13:15:50.717639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.717818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.717825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.718113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.718444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.718451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3198000b90 with addr=10.0.0.2, port=4420 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 [2024-04-26 13:15:50.718633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.703 [2024-04-26 13:15:50.718639] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.703 [2024-04-26 13:15:50.720671] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:32:45.703 [2024-04-26 13:15:50.720704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f3198000b90 (107): Transport endpoint is not connected 00:32:45.703 [2024-04-26 13:15:50.720738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.703 qpair failed and we were unable to recover it. 00:32:45.703 13:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.703 13:15:50 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.703 13:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.703 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:32:45.966 [2024-04-26 13:15:50.729352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.966 [2024-04-26 13:15:50.729422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.966 [2024-04-26 13:15:50.729436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.966 [2024-04-26 13:15:50.729442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.966 [2024-04-26 13:15:50.729447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.966 [2024-04-26 13:15:50.729460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.966 qpair failed and we were unable to recover it. 
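Interleaved with the connect errors, the xtrace lines above show target_disconnect.sh building the target side over RPC: create subsystem cnode1, attach namespace Malloc0, then add TCP listeners for the subsystem and for discovery, after which nvmf_tcp_listen reports the target listening on 10.0.0.2 port 4420. A rough standalone equivalent, assuming a running SPDK target and the in-tree scripts/rpc.py client (the arguments are copied verbatim from the trace; the rpc.py invocation itself is an illustrative assumption, the test reaches the same calls through its rpc_cmd helper):

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420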
00:32:45.966 13:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.966 13:15:50 -- host/target_disconnect.sh@58 -- # wait 19188 00:32:45.966 [2024-04-26 13:15:50.739047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.966 [2024-04-26 13:15:50.739104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.966 [2024-04-26 13:15:50.739116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.966 [2024-04-26 13:15:50.739121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.966 [2024-04-26 13:15:50.739125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.966 [2024-04-26 13:15:50.739136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.966 qpair failed and we were unable to recover it. 00:32:45.966 [2024-04-26 13:15:50.749128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.966 [2024-04-26 13:15:50.749181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.966 [2024-04-26 13:15:50.749193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.966 [2024-04-26 13:15:50.749198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.749202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.749213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.759152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.759212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.759223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.759228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.759232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.759243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 
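From here on the pattern repeats: the target's ctrlr.c rejects each I/O queue CONNECT with "Unknown controller ID 0x1" (no controller with that ID exists on the target at that point), and the host-side nvme_fabric.c sees the CONNECT completion fail with sct 1, sc 130. Those fields are printed in decimal; 130 is 0x82, which for the Fabrics CONNECT command corresponds to the Connect Invalid Parameters status in the NVMe-oF spec (an interpretation based on the spec, not something stated in this log). A trivial decode of the printed values:

# sct 1 = command-specific status type; sc is printed in decimal by the driver.
# 0x82 for Fabrics CONNECT is commonly "Connect Invalid Parameters" (spec-based assumption).
sc=130
printf 'sct=1 sc=%d -> 0x%02x\n' "$sc" "$sc"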
00:32:45.967 [2024-04-26 13:15:50.769176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.769221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.769232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.769238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.769242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.769252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.779281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.779328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.779338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.779343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.779348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.779358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.789211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.789261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.789272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.789277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.789281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.789292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 
00:32:45.967 [2024-04-26 13:15:50.799143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.799197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.799209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.799214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.799218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.799229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.809248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.809295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.809306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.809319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.809323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.809333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.819287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.819336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.819347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.819352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.819356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.819367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 
00:32:45.967 [2024-04-26 13:15:50.829340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.829396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.829406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.829411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.829415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.829425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.839210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.839276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.839288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.839293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.839297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.839308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.849400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.849452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.849464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.849469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.849473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.849484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 
00:32:45.967 [2024-04-26 13:15:50.859406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.859451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.859462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.967 [2024-04-26 13:15:50.859467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.967 [2024-04-26 13:15:50.859471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.967 [2024-04-26 13:15:50.859482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.967 qpair failed and we were unable to recover it. 00:32:45.967 [2024-04-26 13:15:50.869307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.967 [2024-04-26 13:15:50.869362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.967 [2024-04-26 13:15:50.869373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.869378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.869382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.869392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.879476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.879544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.879555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.879560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.879564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.879574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 
00:32:45.968 [2024-04-26 13:15:50.889512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.889562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.889573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.889578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.889582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.889592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.899447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.899490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.899504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.899509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.899513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.899523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.909598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.909648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.909659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.909664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.909668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.909679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 
00:32:45.968 [2024-04-26 13:15:50.919588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.919644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.919654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.919659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.919663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.919674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.929616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.929666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.929677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.929682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.929686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.929696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.939655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.939702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.939713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.939718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.939722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.939735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 
00:32:45.968 [2024-04-26 13:15:50.949664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.949751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.949762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.949767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.949771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.949782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.959711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.959762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.959773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.959778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.959782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.959792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.969725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.969772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.969783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.969788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.969793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.969803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 
00:32:45.968 [2024-04-26 13:15:50.979744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.979794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.979804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.979809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.979813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.979823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.968 qpair failed and we were unable to recover it. 00:32:45.968 [2024-04-26 13:15:50.989774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.968 [2024-04-26 13:15:50.989850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.968 [2024-04-26 13:15:50.989864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.968 [2024-04-26 13:15:50.989869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.968 [2024-04-26 13:15:50.989873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.968 [2024-04-26 13:15:50.989883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.969 qpair failed and we were unable to recover it. 00:32:45.969 [2024-04-26 13:15:50.999816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.969 [2024-04-26 13:15:50.999881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.969 [2024-04-26 13:15:50.999892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.969 [2024-04-26 13:15:50.999897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.969 [2024-04-26 13:15:50.999901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.969 [2024-04-26 13:15:50.999912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.969 qpair failed and we were unable to recover it. 
00:32:45.969 [2024-04-26 13:15:51.009848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.969 [2024-04-26 13:15:51.009894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.969 [2024-04-26 13:15:51.009904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.969 [2024-04-26 13:15:51.009909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.969 [2024-04-26 13:15:51.009913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.969 [2024-04-26 13:15:51.009923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.969 qpair failed and we were unable to recover it. 00:32:45.969 [2024-04-26 13:15:51.019997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.969 [2024-04-26 13:15:51.020057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.969 [2024-04-26 13:15:51.020068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.969 [2024-04-26 13:15:51.020073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.969 [2024-04-26 13:15:51.020077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:45.969 [2024-04-26 13:15:51.020087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.969 qpair failed and we were unable to recover it. 00:32:46.231 [2024-04-26 13:15:51.029888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.231 [2024-04-26 13:15:51.029941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.231 [2024-04-26 13:15:51.029952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.029957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.029964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.029974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 
00:32:46.232 [2024-04-26 13:15:51.039985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.040038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.040049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.040054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.040058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.040068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.050006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.050059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.050070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.050075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.050079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.050089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.060037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.060083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.060094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.060099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.060103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.060113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 
00:32:46.232 [2024-04-26 13:15:51.069995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.070045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.070055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.070060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.070065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.070075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.080060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.080124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.080135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.080140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.080144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.080154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.090109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.090160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.090171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.090176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.090180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.090191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 
00:32:46.232 [2024-04-26 13:15:51.100118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.100179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.100190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.100194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.100199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.100208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.110119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.110169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.110179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.110184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.110188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.110198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.120029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.120086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.120096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.120104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.120108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.120118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 
00:32:46.232 [2024-04-26 13:15:51.130039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.130106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.130117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.130122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.130126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.130136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.140072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.140123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.140133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.140138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.140142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.140152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 00:32:46.232 [2024-04-26 13:15:51.150230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.150279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.150290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.232 [2024-04-26 13:15:51.150295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.232 [2024-04-26 13:15:51.150299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.232 [2024-04-26 13:15:51.150309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.232 qpair failed and we were unable to recover it. 
00:32:46.232 [2024-04-26 13:15:51.160143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.232 [2024-04-26 13:15:51.160196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.232 [2024-04-26 13:15:51.160207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.160212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.160216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.160226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.170299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.170349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.170360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.170365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.170369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.170379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.180208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.180263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.180274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.180279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.180283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.180294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 
00:32:46.233 [2024-04-26 13:15:51.190352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.190403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.190414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.190418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.190422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.190433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.200237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.200297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.200308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.200312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.200317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.200327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.210397] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.210447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.210457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.210465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.210469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.210479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 
00:32:46.233 [2024-04-26 13:15:51.220424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.220469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.220480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.220485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.220489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.220499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.230452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.230499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.230510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.230514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.230518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.230529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.240390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.240486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.240497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.240501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.240505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.240516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 
00:32:46.233 [2024-04-26 13:15:51.250517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.250561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.250572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.250576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.250581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.250591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.260420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.260514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.260525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.260530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.260534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.260544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.233 [2024-04-26 13:15:51.270578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.270627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.270637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.270642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.270646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.270656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 
00:32:46.233 [2024-04-26 13:15:51.280594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.233 [2024-04-26 13:15:51.280646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.233 [2024-04-26 13:15:51.280656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.233 [2024-04-26 13:15:51.280660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.233 [2024-04-26 13:15:51.280665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.233 [2024-04-26 13:15:51.280674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.233 qpair failed and we were unable to recover it. 00:32:46.496 [2024-04-26 13:15:51.290618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.496 [2024-04-26 13:15:51.290667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.496 [2024-04-26 13:15:51.290685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.496 [2024-04-26 13:15:51.290691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.496 [2024-04-26 13:15:51.290696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.496 [2024-04-26 13:15:51.290709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-04-26 13:15:51.300659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.496 [2024-04-26 13:15:51.300710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.496 [2024-04-26 13:15:51.300733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.496 [2024-04-26 13:15:51.300738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.496 [2024-04-26 13:15:51.300742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.496 [2024-04-26 13:15:51.300756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.496 qpair failed and we were unable to recover it. 
00:32:46.496 [2024-04-26 13:15:51.310666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.496 [2024-04-26 13:15:51.310736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.496 [2024-04-26 13:15:51.310747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.496 [2024-04-26 13:15:51.310752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.496 [2024-04-26 13:15:51.310756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.496 [2024-04-26 13:15:51.310767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-04-26 13:15:51.320709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.496 [2024-04-26 13:15:51.320797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.496 [2024-04-26 13:15:51.320810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.496 [2024-04-26 13:15:51.320814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.496 [2024-04-26 13:15:51.320819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.496 [2024-04-26 13:15:51.320831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-04-26 13:15:51.330742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.496 [2024-04-26 13:15:51.330790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.496 [2024-04-26 13:15:51.330801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.496 [2024-04-26 13:15:51.330806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.496 [2024-04-26 13:15:51.330810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.330821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 
00:32:46.497 [2024-04-26 13:15:51.340627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.340674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.340685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.340690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.340694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.340707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.350797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.350889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.350901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.350905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.350910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.350920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.360797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.360852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.360864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.360868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.360873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.360883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 
00:32:46.497 [2024-04-26 13:15:51.370720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.370777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.370788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.370793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.370797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.370808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.380880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.380926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.380936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.380941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.380946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.380956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.390915] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.390964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.390977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.390982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.390987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.390997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 
00:32:46.497 [2024-04-26 13:15:51.400941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.400999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.401010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.401015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.401019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.401029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.410982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.411103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.411114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.411119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.411123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.411134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.420952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.421001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.421011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.421016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.421021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.421031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 
00:32:46.497 [2024-04-26 13:15:51.431008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.431105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.431116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.431121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.431128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.431139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.441026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.441082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.441092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.441097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.441102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.441112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-04-26 13:15:51.450936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.450984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.450995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.451000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.497 [2024-04-26 13:15:51.451004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.497 [2024-04-26 13:15:51.451014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.497 qpair failed and we were unable to recover it. 
00:32:46.497 [2024-04-26 13:15:51.460969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.497 [2024-04-26 13:15:51.461016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.497 [2024-04-26 13:15:51.461027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.497 [2024-04-26 13:15:51.461032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.461036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.461047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 00:32:46.498 [2024-04-26 13:15:51.471121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.471170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.471181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.471186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.471190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.471200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 00:32:46.498 [2024-04-26 13:15:51.481171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.481232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.481243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.481248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.481252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.481262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 
00:32:46.498 [2024-04-26 13:15:51.491176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.491223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.491234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.491239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.491243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.491254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 00:32:46.498 [2024-04-26 13:15:51.501260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.501308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.501320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.501325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.501329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.501340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 00:32:46.498 [2024-04-26 13:15:51.511142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.511195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.511206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.511210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.511214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.511224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 
00:32:46.498 [2024-04-26 13:15:51.521273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.521334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.521345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.521350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.521357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.521367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 00:32:46.498 [2024-04-26 13:15:51.531295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.531344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.531355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.531360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.531364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.531374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 00:32:46.498 [2024-04-26 13:15:51.541333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.541381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.541391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.541396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.541401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.541411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 
00:32:46.498 [2024-04-26 13:15:51.551367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.498 [2024-04-26 13:15:51.551461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.498 [2024-04-26 13:15:51.551472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.498 [2024-04-26 13:15:51.551477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.498 [2024-04-26 13:15:51.551481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.498 [2024-04-26 13:15:51.551492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.498 qpair failed and we were unable to recover it. 00:32:46.761 [2024-04-26 13:15:51.561267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.761 [2024-04-26 13:15:51.561317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.761 [2024-04-26 13:15:51.561328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.761 [2024-04-26 13:15:51.561333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.761 [2024-04-26 13:15:51.561337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.761 [2024-04-26 13:15:51.561348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.761 qpair failed and we were unable to recover it. 00:32:46.761 [2024-04-26 13:15:51.571457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.761 [2024-04-26 13:15:51.571539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.761 [2024-04-26 13:15:51.571550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.761 [2024-04-26 13:15:51.571554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.761 [2024-04-26 13:15:51.571559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.761 [2024-04-26 13:15:51.571569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.761 qpair failed and we were unable to recover it. 
00:32:46.761 [2024-04-26 13:15:51.581456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.761 [2024-04-26 13:15:51.581506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.761 [2024-04-26 13:15:51.581516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.761 [2024-04-26 13:15:51.581521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.761 [2024-04-26 13:15:51.581526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.761 [2024-04-26 13:15:51.581535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.761 qpair failed and we were unable to recover it. 00:32:46.761 [2024-04-26 13:15:51.591490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.761 [2024-04-26 13:15:51.591538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.761 [2024-04-26 13:15:51.591549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.761 [2024-04-26 13:15:51.591554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.761 [2024-04-26 13:15:51.591558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.761 [2024-04-26 13:15:51.591568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.761 qpair failed and we were unable to recover it. 00:32:46.761 [2024-04-26 13:15:51.601517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.761 [2024-04-26 13:15:51.601574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.761 [2024-04-26 13:15:51.601585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.761 [2024-04-26 13:15:51.601590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.761 [2024-04-26 13:15:51.601596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.761 [2024-04-26 13:15:51.601606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.761 qpair failed and we were unable to recover it. 
00:32:46.761 [2024-04-26 13:15:51.611531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.611588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.611599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.611606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.611611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.611621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 00:32:46.762 [2024-04-26 13:15:51.621569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.621615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.621626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.621631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.621635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.621646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 00:32:46.762 [2024-04-26 13:15:51.631493] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.631543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.631554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.631559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.631563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.631573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 
00:32:46.762 [2024-04-26 13:15:51.641597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.641650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.641660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.641665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.641669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.641679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 00:32:46.762 [2024-04-26 13:15:51.651624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.651675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.651694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.651700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.651704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.651718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 00:32:46.762 [2024-04-26 13:15:51.661546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.661595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.661608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.661613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.661617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.661628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 
00:32:46.762 [2024-04-26 13:15:51.671663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.671726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.671737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.671742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.671746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.671756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 00:32:46.762 [2024-04-26 13:15:51.681710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.681758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.681768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.681773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.681777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.681788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 00:32:46.762 [2024-04-26 13:15:51.691721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.691778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.691789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.691794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.691798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.691809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 
00:32:46.762 [2024-04-26 13:15:51.701765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.701810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.701823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.701828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.701832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.701847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.762 qpair failed and we were unable to recover it. 00:32:46.762 [2024-04-26 13:15:51.711807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.762 [2024-04-26 13:15:51.711860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.762 [2024-04-26 13:15:51.711871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.762 [2024-04-26 13:15:51.711876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.762 [2024-04-26 13:15:51.711880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.762 [2024-04-26 13:15:51.711890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 00:32:46.763 [2024-04-26 13:15:51.721827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.721930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.721942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.721947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.721951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.721962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 
00:32:46.763 [2024-04-26 13:15:51.731858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.731913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.731924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.731929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.731933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.731943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 00:32:46.763 [2024-04-26 13:15:51.741881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.741930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.741940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.741945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.741949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.741962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 00:32:46.763 [2024-04-26 13:15:51.751925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.751973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.751985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.751990] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.751994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.752004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 
00:32:46.763 [2024-04-26 13:15:51.761943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.761994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.762005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.762010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.762014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.762024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 00:32:46.763 [2024-04-26 13:15:51.771854] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.771910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.771921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.771926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.771930] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.771941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 00:32:46.763 [2024-04-26 13:15:51.781875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.781936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.781948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.781953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.781957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.781968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 
00:32:46.763 [2024-04-26 13:15:51.792033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.792085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.792098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.792103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.792107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.792117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 00:32:46.763 [2024-04-26 13:15:51.801933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.801999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.802010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.802015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.802019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.802030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 00:32:46.763 [2024-04-26 13:15:51.812092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.763 [2024-04-26 13:15:51.812145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.763 [2024-04-26 13:15:51.812156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.763 [2024-04-26 13:15:51.812161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.763 [2024-04-26 13:15:51.812165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:46.763 [2024-04-26 13:15:51.812175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.763 qpair failed and we were unable to recover it. 
00:32:47.026 [2024-04-26 13:15:51.822010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.822106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.822118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.822123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.822127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.822138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 00:32:47.026 [2024-04-26 13:15:51.832158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.832207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.832218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.832223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.832233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.832243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 00:32:47.026 [2024-04-26 13:15:51.842183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.842236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.842247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.842252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.842256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.842266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 
00:32:47.026 [2024-04-26 13:15:51.852178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.852232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.852243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.852247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.852251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.852261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 00:32:47.026 [2024-04-26 13:15:51.862228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.862278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.862288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.862293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.862297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.862307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 00:32:47.026 [2024-04-26 13:15:51.872259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.872307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.872317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.872322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.872326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.872336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 
00:32:47.026 [2024-04-26 13:15:51.882268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.882323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.882334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.882339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.882343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.882354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 00:32:47.026 [2024-04-26 13:15:51.892321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.892368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.892379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.892384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.892388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.892398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 00:32:47.026 [2024-04-26 13:15:51.902216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.902261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.902273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.902277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.902282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.902292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 
00:32:47.026 [2024-04-26 13:15:51.912372] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.912423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.912434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.912439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.912443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.026 [2024-04-26 13:15:51.912453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.026 qpair failed and we were unable to recover it. 00:32:47.026 [2024-04-26 13:15:51.922398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.026 [2024-04-26 13:15:51.922458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.026 [2024-04-26 13:15:51.922469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.026 [2024-04-26 13:15:51.922473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.026 [2024-04-26 13:15:51.922480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.922491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:51.932424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:51.932482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:51.932492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:51.932497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:51.932501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.932511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 
00:32:47.027 [2024-04-26 13:15:51.942470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:51.942514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:51.942524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:51.942529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:51.942533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.942543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:51.952496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:51.952542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:51.952554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:51.952558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:51.952563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.952573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:51.962529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:51.962577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:51.962588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:51.962592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:51.962596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.962606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 
00:32:47.027 [2024-04-26 13:15:51.972554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:51.972599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:51.972610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:51.972615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:51.972619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.972629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:51.982473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:51.982529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:51.982540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:51.982545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:51.982549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.982559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:51.992627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:51.992679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:51.992697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:51.992703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:51.992707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:51.992721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 
00:32:47.027 [2024-04-26 13:15:52.002656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:52.002710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:52.002722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:52.002727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:52.002732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:52.002743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:52.012645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:52.012696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:52.012707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:52.012715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:52.012720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:52.012730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:52.022706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:52.022753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:52.022763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:52.022768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:52.022772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:52.022783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 
00:32:47.027 [2024-04-26 13:15:52.032747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:52.032796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:52.032806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:52.032811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:52.032815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:52.032825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:52.042726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:52.042781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:52.042791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:52.042796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:52.042800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:52.042811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 00:32:47.027 [2024-04-26 13:15:52.052773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.027 [2024-04-26 13:15:52.052825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.027 [2024-04-26 13:15:52.052840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.027 [2024-04-26 13:15:52.052846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.027 [2024-04-26 13:15:52.052850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.027 [2024-04-26 13:15:52.052861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.027 qpair failed and we were unable to recover it. 
00:32:47.027 [2024-04-26 13:15:52.062820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.028 [2024-04-26 13:15:52.062875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.028 [2024-04-26 13:15:52.062886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.028 [2024-04-26 13:15:52.062891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.028 [2024-04-26 13:15:52.062895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.028 [2024-04-26 13:15:52.062906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-04-26 13:15:52.072721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.028 [2024-04-26 13:15:52.072769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.028 [2024-04-26 13:15:52.072780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.028 [2024-04-26 13:15:52.072785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.028 [2024-04-26 13:15:52.072789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.028 [2024-04-26 13:15:52.072800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-04-26 13:15:52.082874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.028 [2024-04-26 13:15:52.082957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.028 [2024-04-26 13:15:52.082968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.028 [2024-04-26 13:15:52.082973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.028 [2024-04-26 13:15:52.082977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.028 [2024-04-26 13:15:52.082987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.028 qpair failed and we were unable to recover it. 
00:32:47.290 [2024-04-26 13:15:52.092780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.290 [2024-04-26 13:15:52.092834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.290 [2024-04-26 13:15:52.092849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.290 [2024-04-26 13:15:52.092854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.290 [2024-04-26 13:15:52.092858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.290 [2024-04-26 13:15:52.092869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.290 qpair failed and we were unable to recover it. 00:32:47.290 [2024-04-26 13:15:52.102940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.290 [2024-04-26 13:15:52.103024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.290 [2024-04-26 13:15:52.103038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.290 [2024-04-26 13:15:52.103042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.290 [2024-04-26 13:15:52.103047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.290 [2024-04-26 13:15:52.103057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.290 qpair failed and we were unable to recover it. 00:32:47.290 [2024-04-26 13:15:52.112834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.290 [2024-04-26 13:15:52.112891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.290 [2024-04-26 13:15:52.112902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.290 [2024-04-26 13:15:52.112907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.290 [2024-04-26 13:15:52.112911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.290 [2024-04-26 13:15:52.112922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.290 qpair failed and we were unable to recover it. 
00:32:47.290 [2024-04-26 13:15:52.123027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.290 [2024-04-26 13:15:52.123090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.290 [2024-04-26 13:15:52.123100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.290 [2024-04-26 13:15:52.123105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.290 [2024-04-26 13:15:52.123109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.290 [2024-04-26 13:15:52.123119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.290 qpair failed and we were unable to recover it. 00:32:47.290 [2024-04-26 13:15:52.133009] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.290 [2024-04-26 13:15:52.133058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.290 [2024-04-26 13:15:52.133069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.290 [2024-04-26 13:15:52.133073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.290 [2024-04-26 13:15:52.133078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.290 [2024-04-26 13:15:52.133088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.290 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.143046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.143095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.143106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.143110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.143115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.143129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 
00:32:47.291 [2024-04-26 13:15:52.153089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.153144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.153156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.153161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.153165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.153175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.163136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.163232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.163243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.163248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.163252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.163263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.173023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.173081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.173091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.173096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.173100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.173110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 
00:32:47.291 [2024-04-26 13:15:52.183204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.183252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.183262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.183267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.183271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.183282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.193216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.193265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.193279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.193284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.193289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.193299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.203101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.203155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.203165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.203170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.203174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.203185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 
00:32:47.291 [2024-04-26 13:15:52.213249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.213295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.213307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.213311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.213316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.213326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.223268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.223353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.223363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.223368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.223372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.223382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.233266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.233314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.233324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.233329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.233333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.233346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 
00:32:47.291 [2024-04-26 13:15:52.243333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.243385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.243396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.243400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.243404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.243415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.253370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.253420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.253431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.253435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.253440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.253450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.263383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.263475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.263486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.263490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.263495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.263505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 
00:32:47.291 [2024-04-26 13:15:52.273415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.273467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.273477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.273482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.273486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.273496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.283441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.283504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.283515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.283520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.283524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.283534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.293465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.293516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.293526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.293531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.293535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.293545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 
00:32:47.291 [2024-04-26 13:15:52.303490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.303566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.303576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.303581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.303585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.303595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.313519] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.313569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.313579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.313584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.313589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.313599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.323433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.323487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.323498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.323503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.323510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.323521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 
00:32:47.291 [2024-04-26 13:15:52.333450] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.333496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.333507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.333511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.333516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.333526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.291 [2024-04-26 13:15:52.343594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.291 [2024-04-26 13:15:52.343647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.291 [2024-04-26 13:15:52.343658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.291 [2024-04-26 13:15:52.343662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.291 [2024-04-26 13:15:52.343667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.291 [2024-04-26 13:15:52.343677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.291 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.353532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.353631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.353642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.353646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.353651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.353661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 
00:32:47.553 [2024-04-26 13:15:52.363562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.363617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.363628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.363633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.363637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.363647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.373692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.373743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.373754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.373759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.373763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.373774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.383708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.383751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.383762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.383767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.383771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.383782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 
00:32:47.553 [2024-04-26 13:15:52.393759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.393808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.393819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.393823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.393827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.393840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.403784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.403839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.403850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.403855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.403859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.403870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.413804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.413856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.413867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.413875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.413879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.413889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 
00:32:47.553 [2024-04-26 13:15:52.423825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.423883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.423893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.423898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.423902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.423913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.433741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.433789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.433800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.433804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.433808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.433819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.443902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.443952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.443962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.443967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.443971] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.443981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 
00:32:47.553 [2024-04-26 13:15:52.453793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.453848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.453859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.453864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.453868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.453879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.463952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.464001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.464011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.464016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.464020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.464030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.473988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.474069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.474079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.474084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.474088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.474098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 
00:32:47.553 [2024-04-26 13:15:52.483980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.484043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.484054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.484058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.484063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.484073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.553 qpair failed and we were unable to recover it. 00:32:47.553 [2024-04-26 13:15:52.494051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.553 [2024-04-26 13:15:52.494105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.553 [2024-04-26 13:15:52.494115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.553 [2024-04-26 13:15:52.494120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.553 [2024-04-26 13:15:52.494124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.553 [2024-04-26 13:15:52.494134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.554 [2024-04-26 13:15:52.503955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.504003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.504013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.504021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.504025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.504035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 
00:32:47.554 [2024-04-26 13:15:52.514096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.514143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.514154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.514159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.514163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.514173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.554 [2024-04-26 13:15:52.524128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.524216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.524226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.524231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.524235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.524245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.554 [2024-04-26 13:15:52.534074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.534121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.534133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.534137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.534141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.534151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 
00:32:47.554 [2024-04-26 13:15:52.544188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.544230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.544240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.544245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.544249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.544259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.554 [2024-04-26 13:15:52.554225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.554274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.554285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.554290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.554294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.554304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.554 [2024-04-26 13:15:52.564253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.564310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.564320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.564325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.564329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.564339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 
00:32:47.554 [2024-04-26 13:15:52.574285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.574332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.574342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.574347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.574351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.574361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.554 [2024-04-26 13:15:52.584289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.584337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.584347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.584352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.584356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.584366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.554 [2024-04-26 13:15:52.594317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.594375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.594388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.594393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.594397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.594407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 
00:32:47.554 [2024-04-26 13:15:52.604373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.554 [2024-04-26 13:15:52.604472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.554 [2024-04-26 13:15:52.604482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.554 [2024-04-26 13:15:52.604487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.554 [2024-04-26 13:15:52.604491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.554 [2024-04-26 13:15:52.604502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.554 qpair failed and we were unable to recover it. 00:32:47.816 [2024-04-26 13:15:52.614258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.816 [2024-04-26 13:15:52.614311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.816 [2024-04-26 13:15:52.614322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.816 [2024-04-26 13:15:52.614328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.816 [2024-04-26 13:15:52.614332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.816 [2024-04-26 13:15:52.614344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.816 qpair failed and we were unable to recover it. 00:32:47.816 [2024-04-26 13:15:52.624430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.816 [2024-04-26 13:15:52.624480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.816 [2024-04-26 13:15:52.624491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.816 [2024-04-26 13:15:52.624496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.816 [2024-04-26 13:15:52.624500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.816 [2024-04-26 13:15:52.624511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.816 qpair failed and we were unable to recover it. 
00:32:47.816 [2024-04-26 13:15:52.634431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.816 [2024-04-26 13:15:52.634501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.816 [2024-04-26 13:15:52.634512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.816 [2024-04-26 13:15:52.634517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.816 [2024-04-26 13:15:52.634522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.816 [2024-04-26 13:15:52.634538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.816 qpair failed and we were unable to recover it. 00:32:47.816 [2024-04-26 13:15:52.644339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.816 [2024-04-26 13:15:52.644399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.816 [2024-04-26 13:15:52.644409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.816 [2024-04-26 13:15:52.644414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.816 [2024-04-26 13:15:52.644419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.816 [2024-04-26 13:15:52.644429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.816 qpair failed and we were unable to recover it. 00:32:47.816 [2024-04-26 13:15:52.654505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.816 [2024-04-26 13:15:52.654586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.816 [2024-04-26 13:15:52.654597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.816 [2024-04-26 13:15:52.654602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.816 [2024-04-26 13:15:52.654606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.816 [2024-04-26 13:15:52.654616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.816 qpair failed and we were unable to recover it. 
00:32:47.816 [2024-04-26 13:15:52.664394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.816 [2024-04-26 13:15:52.664444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.816 [2024-04-26 13:15:52.664454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.816 [2024-04-26 13:15:52.664459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.816 [2024-04-26 13:15:52.664463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.816 [2024-04-26 13:15:52.664473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.674559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.674608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.674619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.674623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.674628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.674638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.684578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.684630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.684644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.684649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.684653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.684663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 
00:32:47.817 [2024-04-26 13:15:52.694608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.694656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.694667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.694671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.694676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.694685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.704615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.704666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.704676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.704681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.704685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.704695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.714668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.714717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.714728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.714732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.714736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.714746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 
00:32:47.817 [2024-04-26 13:15:52.724670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.724751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.724762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.724766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.724773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.724783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.734706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.734756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.734767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.734772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.734776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.734786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.744741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.744797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.744808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.744813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.744817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.744827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 
00:32:47.817 [2024-04-26 13:15:52.754833] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.754888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.754900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.754905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.754909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.754919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.764816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.764883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.764894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.764899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.764904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.764914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.774826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.774879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.774890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.774895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.774899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.774909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 
00:32:47.817 [2024-04-26 13:15:52.784859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.784907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.784918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.784923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.784927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.784937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.794802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.794852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.817 [2024-04-26 13:15:52.794863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.817 [2024-04-26 13:15:52.794868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.817 [2024-04-26 13:15:52.794872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.817 [2024-04-26 13:15:52.794883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.817 qpair failed and we were unable to recover it. 00:32:47.817 [2024-04-26 13:15:52.804896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.817 [2024-04-26 13:15:52.804954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.818 [2024-04-26 13:15:52.804965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.818 [2024-04-26 13:15:52.804969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.818 [2024-04-26 13:15:52.804973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.818 [2024-04-26 13:15:52.804984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.818 qpair failed and we were unable to recover it. 
00:32:47.818 [2024-04-26 13:15:52.814999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.818 [2024-04-26 13:15:52.815063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.818 [2024-04-26 13:15:52.815074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.818 [2024-04-26 13:15:52.815081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.818 [2024-04-26 13:15:52.815086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.818 [2024-04-26 13:15:52.815096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-04-26 13:15:52.824850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.818 [2024-04-26 13:15:52.824901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.818 [2024-04-26 13:15:52.824911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.818 [2024-04-26 13:15:52.824916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.818 [2024-04-26 13:15:52.824920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.818 [2024-04-26 13:15:52.824931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-04-26 13:15:52.835003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.818 [2024-04-26 13:15:52.835054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.818 [2024-04-26 13:15:52.835065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.818 [2024-04-26 13:15:52.835069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.818 [2024-04-26 13:15:52.835074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.818 [2024-04-26 13:15:52.835084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.818 qpair failed and we were unable to recover it. 
00:32:47.818 [2024-04-26 13:15:52.844906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.818 [2024-04-26 13:15:52.844965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.818 [2024-04-26 13:15:52.844976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.818 [2024-04-26 13:15:52.844980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.818 [2024-04-26 13:15:52.844985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.818 [2024-04-26 13:15:52.844995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-04-26 13:15:52.855065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.818 [2024-04-26 13:15:52.855114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.818 [2024-04-26 13:15:52.855125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.818 [2024-04-26 13:15:52.855130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.818 [2024-04-26 13:15:52.855134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.818 [2024-04-26 13:15:52.855144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-04-26 13:15:52.865091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.818 [2024-04-26 13:15:52.865138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.818 [2024-04-26 13:15:52.865149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.818 [2024-04-26 13:15:52.865154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.818 [2024-04-26 13:15:52.865158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:47.818 [2024-04-26 13:15:52.865168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:47.818 qpair failed and we were unable to recover it. 
00:32:48.081 [2024-04-26 13:15:52.875117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.875166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.875177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.875182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.875186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.081 [2024-04-26 13:15:52.875196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.081 qpair failed and we were unable to recover it. 00:32:48.081 [2024-04-26 13:15:52.885196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.885248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.885259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.885264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.885268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.081 [2024-04-26 13:15:52.885278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.081 qpair failed and we were unable to recover it. 00:32:48.081 [2024-04-26 13:15:52.895172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.895220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.895230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.895235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.895239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.081 [2024-04-26 13:15:52.895249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.081 qpair failed and we were unable to recover it. 
00:32:48.081 [2024-04-26 13:15:52.905197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.905246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.905257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.905264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.905268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.081 [2024-04-26 13:15:52.905278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.081 qpair failed and we were unable to recover it. 00:32:48.081 [2024-04-26 13:15:52.915245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.915293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.915304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.915308] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.915312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.081 [2024-04-26 13:15:52.915323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.081 qpair failed and we were unable to recover it. 00:32:48.081 [2024-04-26 13:15:52.925229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.925288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.925299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.925303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.925308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.081 [2024-04-26 13:15:52.925318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.081 qpair failed and we were unable to recover it. 
00:32:48.081 [2024-04-26 13:15:52.935292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.935336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.935347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.935351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.935355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.081 [2024-04-26 13:15:52.935365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.081 qpair failed and we were unable to recover it. 00:32:48.081 [2024-04-26 13:15:52.945307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.081 [2024-04-26 13:15:52.945353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.081 [2024-04-26 13:15:52.945363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.081 [2024-04-26 13:15:52.945368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.081 [2024-04-26 13:15:52.945373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:52.945383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:52.955345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:52.955394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:52.955405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:52.955409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:52.955414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:52.955424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 
00:32:48.082 [2024-04-26 13:15:52.965351] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:52.965410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:52.965421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:52.965425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:52.965430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:52.965439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:52.975400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:52.975448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:52.975459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:52.975463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:52.975467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:52.975477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:52.985436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:52.985482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:52.985493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:52.985497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:52.985501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:52.985512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 
00:32:48.082 [2024-04-26 13:15:52.995478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:52.995530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:52.995543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:52.995548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:52.995552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:52.995562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:53.005494] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.005550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.005561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.005566] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.005570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.005581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:53.015512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.015560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.015571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.015576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.015581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.015590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 
00:32:48.082 [2024-04-26 13:15:53.025538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.025588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.025598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.025603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.025607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.025618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:53.035556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.035637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.035647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.035652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.035656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.035669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:53.045612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.045663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.045674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.045678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.045683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.045693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 
00:32:48.082 [2024-04-26 13:15:53.055579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.055624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.055634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.055639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.055643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.055653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:53.065664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.065733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.065743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.065748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.065752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.065762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 00:32:48.082 [2024-04-26 13:15:53.075689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.082 [2024-04-26 13:15:53.075740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.082 [2024-04-26 13:15:53.075750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.082 [2024-04-26 13:15:53.075755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.082 [2024-04-26 13:15:53.075759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.082 [2024-04-26 13:15:53.075769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.082 qpair failed and we were unable to recover it. 
00:32:48.082 [2024-04-26 13:15:53.085580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.083 [2024-04-26 13:15:53.085657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.083 [2024-04-26 13:15:53.085670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.083 [2024-04-26 13:15:53.085675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.083 [2024-04-26 13:15:53.085679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.083 [2024-04-26 13:15:53.085689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.083 qpair failed and we were unable to recover it. 00:32:48.083 [2024-04-26 13:15:53.095678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.083 [2024-04-26 13:15:53.095760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.083 [2024-04-26 13:15:53.095771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.083 [2024-04-26 13:15:53.095776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.083 [2024-04-26 13:15:53.095780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.083 [2024-04-26 13:15:53.095790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.083 qpair failed and we were unable to recover it. 00:32:48.083 [2024-04-26 13:15:53.105722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.083 [2024-04-26 13:15:53.105804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.083 [2024-04-26 13:15:53.105815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.083 [2024-04-26 13:15:53.105820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.083 [2024-04-26 13:15:53.105824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.083 [2024-04-26 13:15:53.105834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.083 qpair failed and we were unable to recover it. 
00:32:48.083 [2024-04-26 13:15:53.115788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.083 [2024-04-26 13:15:53.115840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.083 [2024-04-26 13:15:53.115851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.083 [2024-04-26 13:15:53.115856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.083 [2024-04-26 13:15:53.115860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.083 [2024-04-26 13:15:53.115871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.083 qpair failed and we were unable to recover it. 00:32:48.083 [2024-04-26 13:15:53.125844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.083 [2024-04-26 13:15:53.125901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.083 [2024-04-26 13:15:53.125911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.083 [2024-04-26 13:15:53.125916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.083 [2024-04-26 13:15:53.125923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.083 [2024-04-26 13:15:53.125933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.083 qpair failed and we were unable to recover it. 00:32:48.083 [2024-04-26 13:15:53.135845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.083 [2024-04-26 13:15:53.135892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.083 [2024-04-26 13:15:53.135902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.083 [2024-04-26 13:15:53.135907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.083 [2024-04-26 13:15:53.135911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.083 [2024-04-26 13:15:53.135922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.083 qpair failed and we were unable to recover it. 
00:32:48.345 [2024-04-26 13:15:53.145840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.145889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.145900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.145905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.145909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.145919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.155807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.155899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.155910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.155915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.155919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.155929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.165808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.165865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.165876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.165881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.165885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.165896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 
00:32:48.345 [2024-04-26 13:15:53.175848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.175901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.175913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.175918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.175922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.175932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.185938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.185979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.185990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.185995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.185999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.186009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.196014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.196096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.196107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.196112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.196116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.196126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 
00:32:48.345 [2024-04-26 13:15:53.206055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.206116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.206126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.206131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.206135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.206146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.216135] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.216190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.216201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.216206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.216213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.216224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.225956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.226011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.226021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.226026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.226030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.226041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 
00:32:48.345 [2024-04-26 13:15:53.236144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.236194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.236205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.236210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.236214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.236224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.246151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.246205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.246216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.246221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.246225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.246236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.256082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.256173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.256184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.256189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.256193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.256203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 
00:32:48.345 [2024-04-26 13:15:53.266056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.266097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.266108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.266113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.266117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.266128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.276252] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.276331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.276341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.276346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.276350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.276360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.286273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.286324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.286335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.286340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.286344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.286354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 
00:32:48.345 [2024-04-26 13:15:53.296289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.296369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.296380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.296384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.296389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.296399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.306271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.306313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.306323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.306330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.306335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.306345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.316373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.316450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.316461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.316466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.316470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.316480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 
00:32:48.345 [2024-04-26 13:15:53.326385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.326438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.326449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.326453] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.326457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.326468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-04-26 13:15:53.336305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.345 [2024-04-26 13:15:53.336353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.345 [2024-04-26 13:15:53.336364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.345 [2024-04-26 13:15:53.336368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.345 [2024-04-26 13:15:53.336372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.345 [2024-04-26 13:15:53.336382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.346 [2024-04-26 13:15:53.346401] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.346 [2024-04-26 13:15:53.346441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.346 [2024-04-26 13:15:53.346452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.346 [2024-04-26 13:15:53.346457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.346 [2024-04-26 13:15:53.346461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.346 [2024-04-26 13:15:53.346472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-04-26 13:15:53.356481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.346 [2024-04-26 13:15:53.356530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.346 [2024-04-26 13:15:53.356540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.346 [2024-04-26 13:15:53.356545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.346 [2024-04-26 13:15:53.356549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.346 [2024-04-26 13:15:53.356559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-04-26 13:15:53.366491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.346 [2024-04-26 13:15:53.366542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.346 [2024-04-26 13:15:53.366553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.346 [2024-04-26 13:15:53.366558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.346 [2024-04-26 13:15:53.366562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.346 [2024-04-26 13:15:53.366572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-04-26 13:15:53.376520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.346 [2024-04-26 13:15:53.376567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.346 [2024-04-26 13:15:53.376578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.346 [2024-04-26 13:15:53.376584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.346 [2024-04-26 13:15:53.376589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.346 [2024-04-26 13:15:53.376599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-04-26 13:15:53.386385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.346 [2024-04-26 13:15:53.386429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.346 [2024-04-26 13:15:53.386440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.346 [2024-04-26 13:15:53.386445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.346 [2024-04-26 13:15:53.386449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.346 [2024-04-26 13:15:53.386459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-04-26 13:15:53.396586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.346 [2024-04-26 13:15:53.396634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.346 [2024-04-26 13:15:53.396650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.346 [2024-04-26 13:15:53.396655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.346 [2024-04-26 13:15:53.396659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.346 [2024-04-26 13:15:53.396669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.607 [2024-04-26 13:15:53.406606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.607 [2024-04-26 13:15:53.406713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.607 [2024-04-26 13:15:53.406731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.607 [2024-04-26 13:15:53.406736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.607 [2024-04-26 13:15:53.406741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.607 [2024-04-26 13:15:53.406755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.607 qpair failed and we were unable to recover it. 
00:32:48.607 [2024-04-26 13:15:53.416669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.416719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.416732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.416737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.416741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.416752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.426617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.426658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.426669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.426674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.426678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.426688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.436699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.436750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.436760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.436765] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.436769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.436783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 
00:32:48.608 [2024-04-26 13:15:53.446587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.446645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.446657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.446661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.446666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.446676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.456748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.456795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.456805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.456810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.456815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.456825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.466733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.466777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.466787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.466792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.466796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.466806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 
00:32:48.608 [2024-04-26 13:15:53.476773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.476824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.476835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.476844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.476849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.476859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.486699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.486751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.486765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.486770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.486774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.486784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.496720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.496769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.496780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.496785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.496789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.496799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 
00:32:48.608 [2024-04-26 13:15:53.506713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.506755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.506767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.506771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.506775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.506786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.516903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.516982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.516993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.516998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.517002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.517012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.526952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.527007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.527018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.527023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.527030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.527040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 
00:32:48.608 [2024-04-26 13:15:53.536968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.537017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.537027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.537032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.537036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.537047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.546927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.546972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.546983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.546988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.546992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.547003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 00:32:48.608 [2024-04-26 13:15:53.557020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.557117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.557128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.557133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.608 [2024-04-26 13:15:53.557137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.608 [2024-04-26 13:15:53.557148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.608 qpair failed and we were unable to recover it. 
00:32:48.608 [2024-04-26 13:15:53.567013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.608 [2024-04-26 13:15:53.567079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.608 [2024-04-26 13:15:53.567090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.608 [2024-04-26 13:15:53.567094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.567099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.567109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 00:32:48.609 [2024-04-26 13:15:53.577077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.577130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.577141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.577146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.577150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.577160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 00:32:48.609 [2024-04-26 13:15:53.587077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.587165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.587176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.587181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.587185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.587195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 
00:32:48.609 [2024-04-26 13:15:53.597142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.597190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.597200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.597205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.597209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.597219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 00:32:48.609 [2024-04-26 13:15:53.607156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.607208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.607219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.607223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.607228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.607238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 00:32:48.609 [2024-04-26 13:15:53.617161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.617206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.617218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.617222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.617229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.617240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 
00:32:48.609 [2024-04-26 13:15:53.627050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.627095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.627107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.627111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.627116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.627126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 00:32:48.609 [2024-04-26 13:15:53.637123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.637184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.637195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.637200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.637204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.637215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 00:32:48.609 [2024-04-26 13:15:53.647262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.647316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.647327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.647332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.647336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.647346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 
00:32:48.609 [2024-04-26 13:15:53.657278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.609 [2024-04-26 13:15:53.657324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.609 [2024-04-26 13:15:53.657335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.609 [2024-04-26 13:15:53.657340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.609 [2024-04-26 13:15:53.657344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.609 [2024-04-26 13:15:53.657355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.609 qpair failed and we were unable to recover it. 00:32:48.872 [2024-04-26 13:15:53.667289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.667334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.667345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.667350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.667354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.667364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 00:32:48.872 [2024-04-26 13:15:53.677354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.677403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.677414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.677418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.677422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.677432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 
00:32:48.872 [2024-04-26 13:15:53.687384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.687455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.687466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.687471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.687475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.687485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 00:32:48.872 [2024-04-26 13:15:53.697398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.697443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.697454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.697458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.697463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.697473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 00:32:48.872 [2024-04-26 13:15:53.707370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.707429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.707440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.707448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.707452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.707462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 
00:32:48.872 [2024-04-26 13:15:53.717431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.717480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.717490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.717495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.717500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.717510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 00:32:48.872 [2024-04-26 13:15:53.727482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.727536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.727548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.727553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.727557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.727568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 00:32:48.872 [2024-04-26 13:15:53.737506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.872 [2024-04-26 13:15:53.737553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.872 [2024-04-26 13:15:53.737563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.872 [2024-04-26 13:15:53.737568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.872 [2024-04-26 13:15:53.737572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.872 [2024-04-26 13:15:53.737582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.872 qpair failed and we were unable to recover it. 
00:32:48.873 [2024-04-26 13:15:53.747365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.747408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.747420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.747424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.747428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.747439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.757588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.757644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.757655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.757660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.757664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.757674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.767593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.767654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.767672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.767678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.767683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.767696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 
00:32:48.873 [2024-04-26 13:15:53.777612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.777666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.777685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.777691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.777695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.777709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.787609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.787698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.787717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.787723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.787727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.787740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.797679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.797771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.797794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.797799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.797803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.797815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 
00:32:48.873 [2024-04-26 13:15:53.807581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.807644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.807656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.807661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.807665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.807676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.817725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.817817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.817829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.817834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.817842] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.817853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.827729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.827796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.827808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.827812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.827817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.827827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 
00:32:48.873 [2024-04-26 13:15:53.837803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.837852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.837862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.837867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.837871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.837885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.847811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.847869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.847880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.847885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.847889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.873 [2024-04-26 13:15:53.847900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.873 qpair failed and we were unable to recover it. 00:32:48.873 [2024-04-26 13:15:53.857827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.873 [2024-04-26 13:15:53.857877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.873 [2024-04-26 13:15:53.857888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.873 [2024-04-26 13:15:53.857892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.873 [2024-04-26 13:15:53.857897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.857907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 
00:32:48.874 [2024-04-26 13:15:53.867713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.874 [2024-04-26 13:15:53.867758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.874 [2024-04-26 13:15:53.867769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.874 [2024-04-26 13:15:53.867774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.874 [2024-04-26 13:15:53.867778] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.867789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 00:32:48.874 [2024-04-26 13:15:53.877900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.874 [2024-04-26 13:15:53.877948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.874 [2024-04-26 13:15:53.877959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.874 [2024-04-26 13:15:53.877964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.874 [2024-04-26 13:15:53.877968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.877979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 00:32:48.874 [2024-04-26 13:15:53.887922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.874 [2024-04-26 13:15:53.887973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.874 [2024-04-26 13:15:53.887987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.874 [2024-04-26 13:15:53.887992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.874 [2024-04-26 13:15:53.887996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.888006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 
00:32:48.874 [2024-04-26 13:15:53.897959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.874 [2024-04-26 13:15:53.898007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.874 [2024-04-26 13:15:53.898017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.874 [2024-04-26 13:15:53.898022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.874 [2024-04-26 13:15:53.898026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.898036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 00:32:48.874 [2024-04-26 13:15:53.907956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.874 [2024-04-26 13:15:53.908001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.874 [2024-04-26 13:15:53.908012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.874 [2024-04-26 13:15:53.908016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.874 [2024-04-26 13:15:53.908021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.908031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 00:32:48.874 [2024-04-26 13:15:53.917900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.874 [2024-04-26 13:15:53.917954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.874 [2024-04-26 13:15:53.917965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.874 [2024-04-26 13:15:53.917970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.874 [2024-04-26 13:15:53.917975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.917985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 
00:32:48.874 [2024-04-26 13:15:53.928033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.874 [2024-04-26 13:15:53.928084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.874 [2024-04-26 13:15:53.928095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.874 [2024-04-26 13:15:53.928100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.874 [2024-04-26 13:15:53.928104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:48.874 [2024-04-26 13:15:53.928117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.874 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:53.938029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:53.938073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:53.938084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:53.938089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:53.938093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:53.938103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:53.948044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:53.948089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:53.948100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:53.948105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:53.948110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:53.948120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 
00:32:49.136 [2024-04-26 13:15:53.958136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:53.958183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:53.958194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:53.958199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:53.958203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:53.958213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:53.968057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:53.968157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:53.968167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:53.968172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:53.968176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:53.968186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:53.978120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:53.978177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:53.978187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:53.978192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:53.978196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:53.978206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 
00:32:49.136 [2024-04-26 13:15:53.988158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:53.988205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:53.988216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:53.988221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:53.988225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:53.988235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:53.998130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:53.998179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:53.998190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:53.998195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:53.998199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:53.998210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:54.008271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.008323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.008335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.008340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.008344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.008354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 
00:32:49.136 [2024-04-26 13:15:54.018257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.018299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.018310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.018315] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.018322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.018333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:54.028158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.028226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.028237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.028242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.028246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.028256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:54.038354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.038404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.038415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.038419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.038423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.038433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 
00:32:49.136 [2024-04-26 13:15:54.048390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.048449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.048460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.048465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.048469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.048479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:54.058340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.058384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.058395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.058399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.058403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.058414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:54.068438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.068510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.068521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.068526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.068530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.068540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 
00:32:49.136 [2024-04-26 13:15:54.078435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.078486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.078497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.078501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.078505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.078515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:54.088495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.088553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.088564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.088568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.088573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.088583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 00:32:49.136 [2024-04-26 13:15:54.098484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.098550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.098560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.098565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.098569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.098580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.136 qpair failed and we were unable to recover it. 
00:32:49.136 [2024-04-26 13:15:54.108502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.136 [2024-04-26 13:15:54.108541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.136 [2024-04-26 13:15:54.108551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.136 [2024-04-26 13:15:54.108559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.136 [2024-04-26 13:15:54.108563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.136 [2024-04-26 13:15:54.108573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 00:32:49.137 [2024-04-26 13:15:54.118447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.118497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.118509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.118514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.118518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.118528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 00:32:49.137 [2024-04-26 13:15:54.128710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.128768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.128780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.128784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.128788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.128798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 
00:32:49.137 [2024-04-26 13:15:54.138563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.138608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.138619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.138624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.138628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.138638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 00:32:49.137 [2024-04-26 13:15:54.148492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.148531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.148542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.148547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.148551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.148561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 00:32:49.137 [2024-04-26 13:15:54.158697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.158747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.158758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.158763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.158767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.158777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 
00:32:49.137 [2024-04-26 13:15:54.168710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.168759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.168770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.168774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.168779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.168789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 00:32:49.137 [2024-04-26 13:15:54.178700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.178751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.178762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.178766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.178770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.178781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 00:32:49.137 [2024-04-26 13:15:54.188603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.137 [2024-04-26 13:15:54.188648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.137 [2024-04-26 13:15:54.188659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.137 [2024-04-26 13:15:54.188663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.137 [2024-04-26 13:15:54.188668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.137 [2024-04-26 13:15:54.188678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.137 qpair failed and we were unable to recover it. 
00:32:49.398 [2024-04-26 13:15:54.198870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.398 [2024-04-26 13:15:54.198917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.398 [2024-04-26 13:15:54.198928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.398 [2024-04-26 13:15:54.198938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.398 [2024-04-26 13:15:54.198942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.398 [2024-04-26 13:15:54.198953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.398 qpair failed and we were unable to recover it. 00:32:49.398 [2024-04-26 13:15:54.208875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.398 [2024-04-26 13:15:54.208937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.398 [2024-04-26 13:15:54.208948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.398 [2024-04-26 13:15:54.208953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.398 [2024-04-26 13:15:54.208957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.398 [2024-04-26 13:15:54.208968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.398 qpair failed and we were unable to recover it. 00:32:49.398 [2024-04-26 13:15:54.218815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.398 [2024-04-26 13:15:54.218868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.398 [2024-04-26 13:15:54.218880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.398 [2024-04-26 13:15:54.218884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.398 [2024-04-26 13:15:54.218888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.398 [2024-04-26 13:15:54.218898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.398 qpair failed and we were unable to recover it. 
00:32:49.398 [2024-04-26 13:15:54.228713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.398 [2024-04-26 13:15:54.228756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.398 [2024-04-26 13:15:54.228768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.398 [2024-04-26 13:15:54.228773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.398 [2024-04-26 13:15:54.228777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.398 [2024-04-26 13:15:54.228788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.398 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.238901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.238954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.238966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.238971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.238975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.238985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.248880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.248932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.248943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.248947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.248952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.248962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 
00:32:49.399 [2024-04-26 13:15:54.258891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.258931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.258942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.258946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.258950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.258960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.268975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.269020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.269030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.269035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.269039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.269049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.279049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.279096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.279106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.279111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.279115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.279125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 
00:32:49.399 [2024-04-26 13:15:54.289077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.289166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.289180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.289184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.289189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.289199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.299056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.299100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.299111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.299115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.299119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.299130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.309079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.309132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.309142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.309147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.309151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.309161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 
00:32:49.399 [2024-04-26 13:15:54.319164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.319263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.319274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.319279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.319283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.319293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.329030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.329080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.329091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.329096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.329100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.329113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.339202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.339241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.339251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.339256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.339260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.339271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 
00:32:49.399 [2024-04-26 13:15:54.349179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.349221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.349232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.349236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.349241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.349251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.359233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.359280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.359291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.359296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.399 [2024-04-26 13:15:54.359300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.399 [2024-04-26 13:15:54.359310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.399 qpair failed and we were unable to recover it. 00:32:49.399 [2024-04-26 13:15:54.369253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.399 [2024-04-26 13:15:54.369301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.399 [2024-04-26 13:15:54.369311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.399 [2024-04-26 13:15:54.369316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.369320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.369330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 
00:32:49.400 [2024-04-26 13:15:54.379116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.379155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.379168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.379173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.379177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.379187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 00:32:49.400 [2024-04-26 13:15:54.389279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.389322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.389332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.389337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.389341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.389352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 00:32:49.400 [2024-04-26 13:15:54.399354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.399417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.399427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.399432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.399436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.399447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 
00:32:49.400 [2024-04-26 13:15:54.409369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.409455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.409466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.409470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.409474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.409484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 00:32:49.400 [2024-04-26 13:15:54.419361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.419403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.419414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.419418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.419425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.419436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 00:32:49.400 [2024-04-26 13:15:54.429259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.429301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.429312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.429317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.429321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.429331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 
00:32:49.400 [2024-04-26 13:15:54.439489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.439540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.439550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.439555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.439559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.439569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 00:32:49.400 [2024-04-26 13:15:54.449484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.400 [2024-04-26 13:15:54.449536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.400 [2024-04-26 13:15:54.449547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.400 [2024-04-26 13:15:54.449552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.400 [2024-04-26 13:15:54.449556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.400 [2024-04-26 13:15:54.449566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.400 qpair failed and we were unable to recover it. 00:32:49.662 [2024-04-26 13:15:54.459459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.459510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.459521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.459526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.459530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.459540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 
00:32:49.662 [2024-04-26 13:15:54.469368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.469409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.469420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.469425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.469429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.469439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 00:32:49.662 [2024-04-26 13:15:54.479567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.479614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.479625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.479630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.479634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.479644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 00:32:49.662 [2024-04-26 13:15:54.489469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.489531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.489542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.489546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.489551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.489561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 
00:32:49.662 [2024-04-26 13:15:54.499589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.499673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.499684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.499689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.499693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.499703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 00:32:49.662 [2024-04-26 13:15:54.509603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.509643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.509654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.509662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.509666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.509676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 00:32:49.662 [2024-04-26 13:15:54.519560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.519617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.519628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.519633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.519637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.519647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 
00:32:49.662 [2024-04-26 13:15:54.529589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.529681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.529693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.529697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.529702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.529712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.662 qpair failed and we were unable to recover it. 00:32:49.662 [2024-04-26 13:15:54.539702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.662 [2024-04-26 13:15:54.539744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.662 [2024-04-26 13:15:54.539755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.662 [2024-04-26 13:15:54.539760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.662 [2024-04-26 13:15:54.539764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.662 [2024-04-26 13:15:54.539774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.549721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.549761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.549772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.549776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.549780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.549791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 
00:32:49.663 [2024-04-26 13:15:54.559678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.559727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.559738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.559742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.559746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.559756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.569834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.569890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.569901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.569906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.569910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.569920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.579819] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.579866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.579877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.579881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.579886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.579896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 
00:32:49.663 [2024-04-26 13:15:54.589842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.589890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.589902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.589906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.589911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.589921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.599917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.599965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.599976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.599983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.599988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.599998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.609946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.609999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.610009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.610014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.610018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.610029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 
00:32:49.663 [2024-04-26 13:15:54.619936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.619979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.619990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.619995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.619999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.620009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.629863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.629907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.629918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.629923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.629928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.629938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.640021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.640084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.640096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.640101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.640105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.640115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 
00:32:49.663 [2024-04-26 13:15:54.650061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.650155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.650167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.650171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.650176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.650186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.659938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.659978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.659989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.659994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.659998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.660008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 00:32:49.663 [2024-04-26 13:15:54.670042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.663 [2024-04-26 13:15:54.670085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.663 [2024-04-26 13:15:54.670096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.663 [2024-04-26 13:15:54.670101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.663 [2024-04-26 13:15:54.670105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.663 [2024-04-26 13:15:54.670115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.663 qpair failed and we were unable to recover it. 
00:32:49.663 [2024-04-26 13:15:54.680011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.664 [2024-04-26 13:15:54.680060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.664 [2024-04-26 13:15:54.680071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.664 [2024-04-26 13:15:54.680075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.664 [2024-04-26 13:15:54.680079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.664 [2024-04-26 13:15:54.680089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.664 qpair failed and we were unable to recover it. 00:32:49.664 [2024-04-26 13:15:54.690032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.664 [2024-04-26 13:15:54.690097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.664 [2024-04-26 13:15:54.690111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.664 [2024-04-26 13:15:54.690116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.664 [2024-04-26 13:15:54.690120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.664 [2024-04-26 13:15:54.690130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.664 qpair failed and we were unable to recover it. 00:32:49.664 [2024-04-26 13:15:54.700145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.664 [2024-04-26 13:15:54.700191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.664 [2024-04-26 13:15:54.700201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.664 [2024-04-26 13:15:54.700206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.664 [2024-04-26 13:15:54.700210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.664 [2024-04-26 13:15:54.700220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.664 qpair failed and we were unable to recover it. 
00:32:49.664 [2024-04-26 13:15:54.710181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.664 [2024-04-26 13:15:54.710228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.664 [2024-04-26 13:15:54.710238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.664 [2024-04-26 13:15:54.710243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.664 [2024-04-26 13:15:54.710247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.664 [2024-04-26 13:15:54.710257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.664 qpair failed and we were unable to recover it. 00:32:49.664 [2024-04-26 13:15:54.720240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.664 [2024-04-26 13:15:54.720289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.664 [2024-04-26 13:15:54.720300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.664 [2024-04-26 13:15:54.720304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.664 [2024-04-26 13:15:54.720309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.664 [2024-04-26 13:15:54.720319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.664 qpair failed and we were unable to recover it. 00:32:49.926 [2024-04-26 13:15:54.730253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.926 [2024-04-26 13:15:54.730306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.926 [2024-04-26 13:15:54.730317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.926 [2024-04-26 13:15:54.730322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.926 [2024-04-26 13:15:54.730326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.926 [2024-04-26 13:15:54.730339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.926 qpair failed and we were unable to recover it. 
00:32:49.926 [2024-04-26 13:15:54.740164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.926 [2024-04-26 13:15:54.740254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.926 [2024-04-26 13:15:54.740264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.926 [2024-04-26 13:15:54.740269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.926 [2024-04-26 13:15:54.740273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.926 [2024-04-26 13:15:54.740283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.926 qpair failed and we were unable to recover it. 00:32:49.926 [2024-04-26 13:15:54.750286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.926 [2024-04-26 13:15:54.750326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.926 [2024-04-26 13:15:54.750337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.926 [2024-04-26 13:15:54.750341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.926 [2024-04-26 13:15:54.750346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.926 [2024-04-26 13:15:54.750356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.926 qpair failed and we were unable to recover it. 00:32:49.926 [2024-04-26 13:15:54.760359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.926 [2024-04-26 13:15:54.760422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.926 [2024-04-26 13:15:54.760433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.926 [2024-04-26 13:15:54.760438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.926 [2024-04-26 13:15:54.760442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.926 [2024-04-26 13:15:54.760452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.926 qpair failed and we were unable to recover it. 
00:32:49.926 [2024-04-26 13:15:54.770376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.926 [2024-04-26 13:15:54.770426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.926 [2024-04-26 13:15:54.770436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.926 [2024-04-26 13:15:54.770441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.926 [2024-04-26 13:15:54.770445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.926 [2024-04-26 13:15:54.770455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.926 qpair failed and we were unable to recover it. 00:32:49.926 [2024-04-26 13:15:54.780358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.926 [2024-04-26 13:15:54.780401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.926 [2024-04-26 13:15:54.780415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.926 [2024-04-26 13:15:54.780420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.926 [2024-04-26 13:15:54.780425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.926 [2024-04-26 13:15:54.780435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.790408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.790449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.790460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.790465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.790469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.790480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 
00:32:49.927 [2024-04-26 13:15:54.800365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.800415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.800426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.800431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.800435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.800445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.810489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.810543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.810554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.810559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.810563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.810573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.820514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.820553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.820564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.820568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.820576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.820586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 
00:32:49.927 [2024-04-26 13:15:54.830486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.830541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.830552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.830556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.830561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.830571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.840583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.840638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.840648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.840653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.840657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.840668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.850592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.850718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.850729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.850734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.850738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.850748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 
00:32:49.927 [2024-04-26 13:15:54.860566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.860609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.860619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.860624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.860628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.860638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.870635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.870732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.870743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.870747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.870752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.870762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.880660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.880738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.880748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.880753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.880757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.880767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 
00:32:49.927 [2024-04-26 13:15:54.890612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.890665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.890678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.890683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.890687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.890703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.900746] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.900810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.900821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.900826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.900830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.900843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.927 qpair failed and we were unable to recover it. 00:32:49.927 [2024-04-26 13:15:54.910598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.927 [2024-04-26 13:15:54.910641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.927 [2024-04-26 13:15:54.910652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.927 [2024-04-26 13:15:54.910657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.927 [2024-04-26 13:15:54.910664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.927 [2024-04-26 13:15:54.910674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 
00:32:49.928 [2024-04-26 13:15:54.920791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.928 [2024-04-26 13:15:54.920840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.928 [2024-04-26 13:15:54.920852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.928 [2024-04-26 13:15:54.920856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.928 [2024-04-26 13:15:54.920861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.928 [2024-04-26 13:15:54.920871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 00:32:49.928 [2024-04-26 13:15:54.930805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.928 [2024-04-26 13:15:54.930884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.928 [2024-04-26 13:15:54.930894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.928 [2024-04-26 13:15:54.930899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.928 [2024-04-26 13:15:54.930903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.928 [2024-04-26 13:15:54.930914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 00:32:49.928 [2024-04-26 13:15:54.940813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.928 [2024-04-26 13:15:54.940857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.928 [2024-04-26 13:15:54.940868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.928 [2024-04-26 13:15:54.940872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.928 [2024-04-26 13:15:54.940877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.928 [2024-04-26 13:15:54.940887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 
00:32:49.928 [2024-04-26 13:15:54.950821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.928 [2024-04-26 13:15:54.950866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.928 [2024-04-26 13:15:54.950877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.928 [2024-04-26 13:15:54.950882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.928 [2024-04-26 13:15:54.950886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.928 [2024-04-26 13:15:54.950897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 00:32:49.928 [2024-04-26 13:15:54.960767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.928 [2024-04-26 13:15:54.960829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.928 [2024-04-26 13:15:54.960843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.928 [2024-04-26 13:15:54.960848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.928 [2024-04-26 13:15:54.960853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.928 [2024-04-26 13:15:54.960863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 00:32:49.928 [2024-04-26 13:15:54.970798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.928 [2024-04-26 13:15:54.970867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.928 [2024-04-26 13:15:54.970880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.928 [2024-04-26 13:15:54.970885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.928 [2024-04-26 13:15:54.970889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.928 [2024-04-26 13:15:54.970900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 
00:32:49.928 [2024-04-26 13:15:54.980888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.928 [2024-04-26 13:15:54.980931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.928 [2024-04-26 13:15:54.980942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.928 [2024-04-26 13:15:54.980947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.928 [2024-04-26 13:15:54.980951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:49.928 [2024-04-26 13:15:54.980962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.928 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:54.990931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:54.990973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:54.990984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:54.990989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:54.990993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:54.991003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:55.001010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.001059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.001070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.001079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.001083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.001093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-04-26 13:15:55.010907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.010959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.010971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.010975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.010980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.010990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:55.021048] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.021128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.021139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.021144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.021148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.021158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:55.031042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.031092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.031103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.031107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.031112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.031122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-04-26 13:15:55.041124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.041173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.041183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.041188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.041192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.041202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:55.051192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.051252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.051262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.051267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.051271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.051281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:55.061131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.061175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.061186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.061191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.061195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.061205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-04-26 13:15:55.071157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.071246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.071257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.071261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.071265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.071275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:55.081223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.081269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-04-26 13:15:55.081279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-04-26 13:15:55.081284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-04-26 13:15:55.081288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.191 [2024-04-26 13:15:55.081298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.191 qpair failed and we were unable to recover it. 00:32:50.191 [2024-04-26 13:15:55.091229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-04-26 13:15:55.091282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.091296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.091301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.091305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.091315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-04-26 13:15:55.101095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.101143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.101154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.101158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.101163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.101173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.111252] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.111293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.111304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.111309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.111313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.111323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.121304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.121384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.121395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.121400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.121404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.121415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-04-26 13:15:55.131346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.131396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.131407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.131412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.131416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.131430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.141334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.141384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.141394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.141399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.141403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.141413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.151356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.151418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.151429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.151434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.151438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.151448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-04-26 13:15:55.161431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.161478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.161489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.161494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.161498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.161508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.171417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.171467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.171478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.171482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.171487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.171498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.181440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.181481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.181494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.181499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.181503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.181513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-04-26 13:15:55.191503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.191545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.191555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.191560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.191564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.191574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.201549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.201603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.201621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.201627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.201632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.201645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 00:32:50.192 [2024-04-26 13:15:55.211568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.211623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.211641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.211646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-04-26 13:15:55.211651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.192 [2024-04-26 13:15:55.211664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-04-26 13:15:55.221551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-04-26 13:15:55.221634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-04-26 13:15:55.221652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-04-26 13:15:55.221658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.193 [2024-04-26 13:15:55.221666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.193 [2024-04-26 13:15:55.221679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.193 qpair failed and we were unable to recover it. 00:32:50.193 [2024-04-26 13:15:55.231458] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.193 [2024-04-26 13:15:55.231501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.193 [2024-04-26 13:15:55.231512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.193 [2024-04-26 13:15:55.231517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.193 [2024-04-26 13:15:55.231522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.193 [2024-04-26 13:15:55.231533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.193 qpair failed and we were unable to recover it. 00:32:50.193 [2024-04-26 13:15:55.241655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.193 [2024-04-26 13:15:55.241710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.193 [2024-04-26 13:15:55.241721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.193 [2024-04-26 13:15:55.241726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.193 [2024-04-26 13:15:55.241730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.193 [2024-04-26 13:15:55.241740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.193 qpair failed and we were unable to recover it. 
00:32:50.454 [2024-04-26 13:15:55.251703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.454 [2024-04-26 13:15:55.251770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.454 [2024-04-26 13:15:55.251781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.454 [2024-04-26 13:15:55.251786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.454 [2024-04-26 13:15:55.251790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.454 [2024-04-26 13:15:55.251801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.454 qpair failed and we were unable to recover it. 00:32:50.454 [2024-04-26 13:15:55.261673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.454 [2024-04-26 13:15:55.261722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.454 [2024-04-26 13:15:55.261733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.261738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.261742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.261752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 00:32:50.455 [2024-04-26 13:15:55.271707] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.271803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.271814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.271819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.271823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.271834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 
00:32:50.455 [2024-04-26 13:15:55.281639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.281688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.281699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.281703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.281708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.281718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 00:32:50.455 [2024-04-26 13:15:55.291799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.291855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.291866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.291871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.291875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.291886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 00:32:50.455 [2024-04-26 13:15:55.301804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.301897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.301908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.301912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.301917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.301927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 
00:32:50.455 [2024-04-26 13:15:55.311818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.311906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.311917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.311922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.311929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.311940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 00:32:50.455 [2024-04-26 13:15:55.321884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.321943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.321954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.321958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.321963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.321973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 00:32:50.455 [2024-04-26 13:15:55.331897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.331950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.331960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.331965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.331969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.331979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 
00:32:50.455 [2024-04-26 13:15:55.341896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.341974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.341985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.341989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.341994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.342004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 00:32:50.455 [2024-04-26 13:15:55.351803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.351848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.351859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.351864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.351868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.351879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 00:32:50.455 [2024-04-26 13:15:55.361905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.455 [2024-04-26 13:15:55.361977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.455 [2024-04-26 13:15:55.361990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.455 [2024-04-26 13:15:55.361995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.455 [2024-04-26 13:15:55.361999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.455 [2024-04-26 13:15:55.362009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.455 qpair failed and we were unable to recover it. 
00:32:50.456 [2024-04-26 13:15:55.372011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.372066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.372077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.372082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.372087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.372097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.456 [2024-04-26 13:15:55.381970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.382014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.382025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.382030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.382034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.382044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.456 [2024-04-26 13:15:55.392037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.392078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.392088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.392093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.392097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.392108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 
00:32:50.456 [2024-04-26 13:15:55.402113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.402161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.402172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.402179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.402184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.402194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.456 [2024-04-26 13:15:55.412123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.412174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.412185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.412190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.412194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.412204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.456 [2024-04-26 13:15:55.421975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.422020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.422031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.422035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.422040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.422050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 
00:32:50.456 [2024-04-26 13:15:55.432155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.432200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.432211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.432216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.432220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.432230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.456 [2024-04-26 13:15:55.442222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.442271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.442282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.442287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.442291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.442301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.456 [2024-04-26 13:15:55.452099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.452165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.452177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.452181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.452186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.452196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 
00:32:50.456 [2024-04-26 13:15:55.462208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.462301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.462312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.462317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.462321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.462332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.456 [2024-04-26 13:15:55.472244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.456 [2024-04-26 13:15:55.472289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.456 [2024-04-26 13:15:55.472300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.456 [2024-04-26 13:15:55.472304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.456 [2024-04-26 13:15:55.472309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.456 [2024-04-26 13:15:55.472319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.456 qpair failed and we were unable to recover it. 00:32:50.457 [2024-04-26 13:15:55.482329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.457 [2024-04-26 13:15:55.482384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.457 [2024-04-26 13:15:55.482395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.457 [2024-04-26 13:15:55.482399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.457 [2024-04-26 13:15:55.482404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.457 [2024-04-26 13:15:55.482414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.457 qpair failed and we were unable to recover it. 
00:32:50.457 [2024-04-26 13:15:55.492342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.457 [2024-04-26 13:15:55.492393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.457 [2024-04-26 13:15:55.492408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.457 [2024-04-26 13:15:55.492412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.457 [2024-04-26 13:15:55.492417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.457 [2024-04-26 13:15:55.492427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.457 qpair failed and we were unable to recover it. 00:32:50.457 [2024-04-26 13:15:55.502212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.457 [2024-04-26 13:15:55.502264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.457 [2024-04-26 13:15:55.502275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.457 [2024-04-26 13:15:55.502279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.457 [2024-04-26 13:15:55.502283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.457 [2024-04-26 13:15:55.502293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.457 qpair failed and we were unable to recover it. 00:32:50.457 [2024-04-26 13:15:55.512366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.457 [2024-04-26 13:15:55.512405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.457 [2024-04-26 13:15:55.512416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.457 [2024-04-26 13:15:55.512421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.457 [2024-04-26 13:15:55.512425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.457 [2024-04-26 13:15:55.512435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.457 qpair failed and we were unable to recover it. 
00:32:50.719 [2024-04-26 13:15:55.522403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.719 [2024-04-26 13:15:55.522449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.522461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.522466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.522470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.522481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.532510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.532563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.532574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.532579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.532583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.532596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.542449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.542540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.542551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.542556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.542560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.542570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 
00:32:50.720 [2024-04-26 13:15:55.552435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.552478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.552489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.552494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.552499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.552509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.562393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.562443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.562454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.562459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.562463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.562473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.572562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.572614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.572625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.572630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.572634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.572644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 
00:32:50.720 [2024-04-26 13:15:55.582545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.582589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.582612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.582618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.582622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.582636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.592536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.592580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.592592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.592597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.592602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.592613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.602649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.602702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.602713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.602718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.602722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.602733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 
00:32:50.720 [2024-04-26 13:15:55.612681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.612782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.612793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.612798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.612802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.612812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.622679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.622721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.622732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.622737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.622741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.622755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 00:32:50.720 [2024-04-26 13:15:55.632677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.632725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.632736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.632741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.632745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.720 [2024-04-26 13:15:55.632755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.720 qpair failed and we were unable to recover it. 
00:32:50.720 [2024-04-26 13:15:55.642767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.720 [2024-04-26 13:15:55.642819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.720 [2024-04-26 13:15:55.642830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.720 [2024-04-26 13:15:55.642835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.720 [2024-04-26 13:15:55.642844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.642855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.652775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.652859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.652870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.652875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.652879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.652890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.662805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.662894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.662905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.662910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.662914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.662925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 
00:32:50.721 [2024-04-26 13:15:55.672805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.672853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.672864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.672869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.672873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.672884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.682784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.682850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.682860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.682865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.682869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.682880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.692887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.692939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.692952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.692957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.692961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.692971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 
00:32:50.721 [2024-04-26 13:15:55.702784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.702844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.702855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.702860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.702864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.702875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.712916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.712962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.712973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.712978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.712988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.712998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.723002] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.723052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.723063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.723067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.723071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.723082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 
00:32:50.721 [2024-04-26 13:15:55.732991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.733047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.733059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.733064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.733069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.733079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.742863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.742918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.742929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.742933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.742938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.742948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.752988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.753029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.753041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.753045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.753050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.753060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 
00:32:50.721 [2024-04-26 13:15:55.763102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.763159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.763170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.763175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.763179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.763189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.721 [2024-04-26 13:15:55.773135] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.721 [2024-04-26 13:15:55.773187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.721 [2024-04-26 13:15:55.773197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.721 [2024-04-26 13:15:55.773202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.721 [2024-04-26 13:15:55.773206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.721 [2024-04-26 13:15:55.773216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.721 qpair failed and we were unable to recover it. 00:32:50.985 [2024-04-26 13:15:55.783110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.985 [2024-04-26 13:15:55.783152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.985 [2024-04-26 13:15:55.783163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.985 [2024-04-26 13:15:55.783168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.985 [2024-04-26 13:15:55.783172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.985 [2024-04-26 13:15:55.783183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.985 qpair failed and we were unable to recover it. 
00:32:50.985 [2024-04-26 13:15:55.793124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.985 [2024-04-26 13:15:55.793164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.985 [2024-04-26 13:15:55.793175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.985 [2024-04-26 13:15:55.793180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.985 [2024-04-26 13:15:55.793185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.985 [2024-04-26 13:15:55.793195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.985 qpair failed and we were unable to recover it. 00:32:50.985 [2024-04-26 13:15:55.803197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.985 [2024-04-26 13:15:55.803245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.985 [2024-04-26 13:15:55.803255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.985 [2024-04-26 13:15:55.803263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.985 [2024-04-26 13:15:55.803267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.985 [2024-04-26 13:15:55.803278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.985 qpair failed and we were unable to recover it. 00:32:50.985 [2024-04-26 13:15:55.813214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.985 [2024-04-26 13:15:55.813268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.985 [2024-04-26 13:15:55.813278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.985 [2024-04-26 13:15:55.813283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.985 [2024-04-26 13:15:55.813287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.985 [2024-04-26 13:15:55.813297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.985 qpair failed and we were unable to recover it. 
00:32:50.985 [2024-04-26 13:15:55.823160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.985 [2024-04-26 13:15:55.823205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.985 [2024-04-26 13:15:55.823215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.985 [2024-04-26 13:15:55.823220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.985 [2024-04-26 13:15:55.823224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.985 [2024-04-26 13:15:55.823234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.985 qpair failed and we were unable to recover it. 00:32:50.985 [2024-04-26 13:15:55.833226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.985 [2024-04-26 13:15:55.833270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.985 [2024-04-26 13:15:55.833281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.985 [2024-04-26 13:15:55.833286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.985 [2024-04-26 13:15:55.833290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.985 [2024-04-26 13:15:55.833300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.985 qpair failed and we were unable to recover it. 00:32:50.985 [2024-04-26 13:15:55.843312] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.985 [2024-04-26 13:15:55.843357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.985 [2024-04-26 13:15:55.843368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.985 [2024-04-26 13:15:55.843372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.985 [2024-04-26 13:15:55.843377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.985 [2024-04-26 13:15:55.843387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.985 qpair failed and we were unable to recover it. 
00:32:50.986 [2024-04-26 13:15:55.853334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.853430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.853441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.853446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.853450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.853461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.863336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.863415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.863426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.863431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.863435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.863446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.873347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.873435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.873446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.873451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.873456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.873466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 
00:32:50.986 [2024-04-26 13:15:55.883429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.883495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.883505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.883510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.883515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.883525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.893462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.893520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.893531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.893539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.893544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.893554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.903440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.903485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.903496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.903501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.903505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.903515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 
00:32:50.986 [2024-04-26 13:15:55.913468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.913515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.913525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.913530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.913534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.913544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.923537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.923588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.923599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.923604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.923608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.923619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.933558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.933642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.933654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.933659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.933663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.933673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 
00:32:50.986 [2024-04-26 13:15:55.943546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.943586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.943597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.943601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.943605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.943616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.953447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.953494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.953505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.953509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.953513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.953524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.963658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.963707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.963718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.963723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.963727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.963737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 
00:32:50.986 [2024-04-26 13:15:55.973663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.973717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.973735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.973741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.973746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.973759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.983668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.983715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.983730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.983735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.986 [2024-04-26 13:15:55.983740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.986 [2024-04-26 13:15:55.983751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.986 qpair failed and we were unable to recover it. 00:32:50.986 [2024-04-26 13:15:55.993654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.986 [2024-04-26 13:15:55.993703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.986 [2024-04-26 13:15:55.993714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.986 [2024-04-26 13:15:55.993719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.987 [2024-04-26 13:15:55.993724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3198000b90 00:32:50.987 [2024-04-26 13:15:55.993734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.987 qpair failed and we were unable to recover it. 
00:32:50.987 [2024-04-26 13:15:55.994119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221160 is same with the state(5) to be set 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 [2024-04-26 13:15:55.994654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read 
completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Read completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 Write completed with error (sct=0, sc=8) 00:32:50.987 starting I/O failed 00:32:50.987 [2024-04-26 13:15:55.995372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:50.987 [2024-04-26 13:15:56.003791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.987 [2024-04-26 13:15:56.003921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.987 [2024-04-26 13:15:56.003972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.987 [2024-04-26 13:15:56.003995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.987 [2024-04-26 13:15:56.004014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f31a0000b90 00:32:50.987 [2024-04-26 13:15:56.004059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:50.987 qpair failed and we were unable to recover it. 
00:32:50.987 [2024-04-26 13:15:56.013801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.987 [2024-04-26 13:15:56.013902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.987 [2024-04-26 13:15:56.013934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.987 [2024-04-26 13:15:56.013949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.987 [2024-04-26 13:15:56.013962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f31a0000b90 00:32:50.987 [2024-04-26 13:15:56.013993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:50.987 qpair failed and we were unable to recover it. 00:32:50.987 [2024-04-26 13:15:56.023788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.987 [2024-04-26 13:15:56.023903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.987 [2024-04-26 13:15:56.023966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.987 [2024-04-26 13:15:56.024006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.987 [2024-04-26 13:15:56.024026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3190000b90 00:32:50.987 [2024-04-26 13:15:56.024079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:50.987 qpair failed and we were unable to recover it. 00:32:50.987 [2024-04-26 13:15:56.033790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.987 [2024-04-26 13:15:56.033873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.987 [2024-04-26 13:15:56.033904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.987 [2024-04-26 13:15:56.033920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.987 [2024-04-26 13:15:56.033933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3190000b90 00:32:50.987 [2024-04-26 13:15:56.033964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:50.987 qpair failed and we were unable to recover it. 
00:32:51.248 [2024-04-26 13:15:56.043946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.248 [2024-04-26 13:15:56.044009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.248 [2024-04-26 13:15:56.044034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.248 [2024-04-26 13:15:56.044043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.248 [2024-04-26 13:15:56.044050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1213650 00:32:51.248 [2024-04-26 13:15:56.044068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.248 qpair failed and we were unable to recover it. 00:32:51.248 [2024-04-26 13:15:56.053805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.248 [2024-04-26 13:15:56.053874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.248 [2024-04-26 13:15:56.053890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.248 [2024-04-26 13:15:56.053898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.248 [2024-04-26 13:15:56.053905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1213650 00:32:51.248 [2024-04-26 13:15:56.053920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.248 qpair failed and we were unable to recover it. 00:32:51.248 [2024-04-26 13:15:56.054303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1221160 (9): Bad file descriptor 00:32:51.248 Initializing NVMe Controllers 00:32:51.248 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:51.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:51.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:51.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:51.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:51.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:51.248 Initialization complete. Launching workers. 
00:32:51.248 Starting thread on core 1 00:32:51.248 Starting thread on core 2 00:32:51.248 Starting thread on core 3 00:32:51.248 Starting thread on core 0 00:32:51.248 13:15:56 -- host/target_disconnect.sh@59 -- # sync 00:32:51.248 00:32:51.248 real 0m11.267s 00:32:51.248 user 0m21.471s 00:32:51.248 sys 0m3.444s 00:32:51.248 13:15:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:51.248 13:15:56 -- common/autotest_common.sh@10 -- # set +x 00:32:51.248 ************************************ 00:32:51.248 END TEST nvmf_target_disconnect_tc2 00:32:51.248 ************************************ 00:32:51.248 13:15:56 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:32:51.248 13:15:56 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:51.248 13:15:56 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:32:51.248 13:15:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:51.248 13:15:56 -- nvmf/common.sh@117 -- # sync 00:32:51.248 13:15:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:51.248 13:15:56 -- nvmf/common.sh@120 -- # set +e 00:32:51.248 13:15:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:51.248 13:15:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:51.248 rmmod nvme_tcp 00:32:51.248 rmmod nvme_fabrics 00:32:51.248 rmmod nvme_keyring 00:32:51.248 13:15:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:51.248 13:15:56 -- nvmf/common.sh@124 -- # set -e 00:32:51.248 13:15:56 -- nvmf/common.sh@125 -- # return 0 00:32:51.248 13:15:56 -- nvmf/common.sh@478 -- # '[' -n 19886 ']' 00:32:51.249 13:15:56 -- nvmf/common.sh@479 -- # killprocess 19886 00:32:51.249 13:15:56 -- common/autotest_common.sh@936 -- # '[' -z 19886 ']' 00:32:51.249 13:15:56 -- common/autotest_common.sh@940 -- # kill -0 19886 00:32:51.249 13:15:56 -- common/autotest_common.sh@941 -- # uname 00:32:51.249 13:15:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:51.249 13:15:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 19886 00:32:51.249 13:15:56 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:32:51.249 13:15:56 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:32:51.249 13:15:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 19886' 00:32:51.249 killing process with pid 19886 00:32:51.249 13:15:56 -- common/autotest_common.sh@955 -- # kill 19886 00:32:51.249 13:15:56 -- common/autotest_common.sh@960 -- # wait 19886 00:32:51.509 13:15:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:51.509 13:15:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:51.509 13:15:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:51.509 13:15:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:51.509 13:15:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:51.509 13:15:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.509 13:15:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.509 13:15:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.492 13:15:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:53.492 00:32:53.492 real 0m21.379s 00:32:53.492 user 0m48.798s 00:32:53.492 sys 0m9.298s 00:32:53.492 13:15:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:53.492 13:15:58 -- common/autotest_common.sh@10 -- # set +x 00:32:53.492 ************************************ 00:32:53.492 END TEST nvmf_target_disconnect 00:32:53.492 ************************************ 
00:32:53.492 13:15:58 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:32:53.492 13:15:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:53.492 13:15:58 -- common/autotest_common.sh@10 -- # set +x 00:32:53.492 13:15:58 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:32:53.492 00:32:53.492 real 26m16.409s 00:32:53.492 user 66m3.853s 00:32:53.492 sys 7m11.216s 00:32:53.492 13:15:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:53.492 13:15:58 -- common/autotest_common.sh@10 -- # set +x 00:32:53.492 ************************************ 00:32:53.492 END TEST nvmf_tcp 00:32:53.492 ************************************ 00:32:53.753 13:15:58 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:32:53.753 13:15:58 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:53.753 13:15:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:53.753 13:15:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:53.753 13:15:58 -- common/autotest_common.sh@10 -- # set +x 00:32:53.753 ************************************ 00:32:53.753 START TEST spdkcli_nvmf_tcp 00:32:53.753 ************************************ 00:32:53.753 13:15:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:53.753 * Looking for test storage... 00:32:54.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:54.015 13:15:58 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:54.015 13:15:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:54.015 13:15:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:54.015 13:15:58 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.015 13:15:58 -- nvmf/common.sh@7 -- # uname -s 00:32:54.015 13:15:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.015 13:15:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.015 13:15:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.015 13:15:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.015 13:15:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.015 13:15:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.015 13:15:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.015 13:15:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.015 13:15:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.015 13:15:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.015 13:15:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:54.015 13:15:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:54.015 13:15:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.015 13:15:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.015 13:15:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.015 13:15:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.015 13:15:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.015 13:15:58 -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.015 13:15:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.015 13:15:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.015 13:15:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.015 13:15:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.015 13:15:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.015 13:15:58 -- paths/export.sh@5 -- # export PATH 00:32:54.015 13:15:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.015 13:15:58 -- nvmf/common.sh@47 -- # : 0 00:32:54.015 13:15:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:54.015 13:15:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:54.015 13:15:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.015 13:15:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.015 13:15:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.015 13:15:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:54.015 13:15:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:54.015 13:15:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:54.015 13:15:58 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:54.015 13:15:58 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:54.015 13:15:58 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:54.015 13:15:58 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:54.015 13:15:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:54.015 13:15:58 -- common/autotest_common.sh@10 -- # set +x 00:32:54.015 13:15:58 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:54.015 13:15:58 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=21736 00:32:54.015 13:15:58 -- spdkcli/common.sh@34 -- # waitforlisten 21736 00:32:54.015 13:15:58 -- common/autotest_common.sh@817 -- # '[' -z 21736 ']' 00:32:54.015 13:15:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.015 13:15:58 -- spdkcli/common.sh@32 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:54.015 13:15:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:54.015 13:15:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.015 13:15:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:54.015 13:15:58 -- common/autotest_common.sh@10 -- # set +x 00:32:54.015 [2024-04-26 13:15:58.903792] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:32:54.015 [2024-04-26 13:15:58.903876] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21736 ] 00:32:54.015 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.015 [2024-04-26 13:15:58.968472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:54.015 [2024-04-26 13:15:59.042158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.015 [2024-04-26 13:15:59.042162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.958 13:15:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:54.958 13:15:59 -- common/autotest_common.sh@850 -- # return 0 00:32:54.958 13:15:59 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:54.958 13:15:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:54.958 13:15:59 -- common/autotest_common.sh@10 -- # set +x 00:32:54.958 13:15:59 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:54.958 13:15:59 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:54.958 13:15:59 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:54.958 13:15:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:54.958 13:15:59 -- common/autotest_common.sh@10 -- # set +x 00:32:54.958 13:15:59 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:54.958 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:54.958 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:54.958 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:54.958 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:54.958 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:54.958 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:54.958 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:54.958 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' 
True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:54.958 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:54.958 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:54.958 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:54.958 ' 00:32:55.218 [2024-04-26 13:16:00.035098] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:57.127 [2024-04-26 13:16:02.038670] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.507 [2024-04-26 13:16:03.202577] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:00.414 [2024-04-26 13:16:05.336790] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:02.326 [2024-04-26 13:16:07.170337] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:03.709 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:03.709 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:03.709 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:03.709 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:03.709 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:03.709 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:03.709 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:03.709 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:03.709 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:03.709 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:03.709 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:03.709 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:03.709 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:03.709 13:16:08 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:03.709 13:16:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:03.709 13:16:08 -- common/autotest_common.sh@10 -- # set +x 00:33:03.709 13:16:08 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:03.709 13:16:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:03.709 13:16:08 -- common/autotest_common.sh@10 -- # set +x 00:33:03.709 13:16:08 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:03.709 13:16:08 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:04.280 13:16:09 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:04.280 13:16:09 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:04.280 13:16:09 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:04.280 13:16:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:04.280 13:16:09 -- common/autotest_common.sh@10 -- # set +x 00:33:04.280 13:16:09 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:04.280 13:16:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:04.280 13:16:09 -- common/autotest_common.sh@10 -- # set +x 00:33:04.280 13:16:09 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:04.280 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:04.280 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:04.280 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:04.280 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:04.280 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:04.280 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:04.280 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:04.280 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:04.280 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:04.280 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:04.280 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:04.280 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:04.280 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:04.280 ' 00:33:09.559 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:09.559 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:09.559 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:09.559 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:09.559 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:09.559 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:09.559 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:09.559 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:09.559 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:09.559 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:09.559 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:09.559 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:09.559 Executing command: 
['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:09.559 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:09.559 13:16:14 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:09.559 13:16:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:09.559 13:16:14 -- common/autotest_common.sh@10 -- # set +x 00:33:09.559 13:16:14 -- spdkcli/nvmf.sh@90 -- # killprocess 21736 00:33:09.559 13:16:14 -- common/autotest_common.sh@936 -- # '[' -z 21736 ']' 00:33:09.559 13:16:14 -- common/autotest_common.sh@940 -- # kill -0 21736 00:33:09.559 13:16:14 -- common/autotest_common.sh@941 -- # uname 00:33:09.559 13:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:09.559 13:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 21736 00:33:09.559 13:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:09.559 13:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:09.559 13:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 21736' 00:33:09.559 killing process with pid 21736 00:33:09.559 13:16:14 -- common/autotest_common.sh@955 -- # kill 21736 00:33:09.559 [2024-04-26 13:16:14.105950] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:09.559 13:16:14 -- common/autotest_common.sh@960 -- # wait 21736 00:33:09.559 13:16:14 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:09.559 13:16:14 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:09.559 13:16:14 -- spdkcli/common.sh@13 -- # '[' -n 21736 ']' 00:33:09.559 13:16:14 -- spdkcli/common.sh@14 -- # killprocess 21736 00:33:09.559 13:16:14 -- common/autotest_common.sh@936 -- # '[' -z 21736 ']' 00:33:09.559 13:16:14 -- common/autotest_common.sh@940 -- # kill -0 21736 00:33:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (21736) - No such process 00:33:09.559 13:16:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 21736 is not found' 00:33:09.559 Process with pid 21736 is not found 00:33:09.559 13:16:14 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:09.559 13:16:14 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:09.559 13:16:14 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:09.559 00:33:09.559 real 0m15.525s 00:33:09.559 user 0m31.938s 00:33:09.559 sys 0m0.687s 00:33:09.559 13:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:09.559 13:16:14 -- common/autotest_common.sh@10 -- # set +x 00:33:09.559 ************************************ 00:33:09.559 END TEST spdkcli_nvmf_tcp 00:33:09.559 ************************************ 00:33:09.559 13:16:14 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:09.559 13:16:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:09.559 13:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:09.559 13:16:14 -- common/autotest_common.sh@10 -- # set +x 00:33:09.559 ************************************ 00:33:09.559 START TEST nvmf_identify_passthru 00:33:09.559 ************************************ 00:33:09.560 13:16:14 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:09.560 * Looking for test storage... 00:33:09.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.560 13:16:14 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.560 13:16:14 -- nvmf/common.sh@7 -- # uname -s 00:33:09.560 13:16:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.560 13:16:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.560 13:16:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.560 13:16:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.560 13:16:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.560 13:16:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.560 13:16:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.560 13:16:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.560 13:16:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.560 13:16:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.560 13:16:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.560 13:16:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.560 13:16:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.560 13:16:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.560 13:16:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.560 13:16:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.560 13:16:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.560 13:16:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.560 13:16:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.560 13:16:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.560 13:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- paths/export.sh@5 -- # export PATH 00:33:09.560 13:16:14 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- nvmf/common.sh@47 -- # : 0 00:33:09.560 13:16:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:09.560 13:16:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:09.560 13:16:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.560 13:16:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.560 13:16:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.560 13:16:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:09.560 13:16:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:09.560 13:16:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:09.560 13:16:14 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.560 13:16:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.560 13:16:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.560 13:16:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.560 13:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- paths/export.sh@5 -- # export PATH 00:33:09.560 13:16:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.560 13:16:14 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:09.560 13:16:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:09.560 13:16:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.560 13:16:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:09.560 13:16:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:09.560 13:16:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:09.560 13:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.560 13:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:09.560 13:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.560 13:16:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:33:09.560 13:16:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:33:09.560 13:16:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:09.560 13:16:14 -- common/autotest_common.sh@10 -- # set +x 00:33:17.697 13:16:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:17.697 13:16:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:17.697 13:16:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:17.697 13:16:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:17.697 13:16:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:17.697 13:16:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:17.697 13:16:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:17.697 13:16:21 -- nvmf/common.sh@295 -- # net_devs=() 00:33:17.697 13:16:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:17.697 13:16:21 -- nvmf/common.sh@296 -- # e810=() 00:33:17.697 13:16:21 -- nvmf/common.sh@296 -- # local -ga e810 00:33:17.697 13:16:21 -- nvmf/common.sh@297 -- # x722=() 00:33:17.697 13:16:21 -- nvmf/common.sh@297 -- # local -ga x722 00:33:17.697 13:16:21 -- nvmf/common.sh@298 -- # mlx=() 00:33:17.697 13:16:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:17.697 13:16:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.697 13:16:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:17.697 13:16:21 -- nvmf/common.sh@321 -- # [[ tcp 
== rdma ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:17.697 13:16:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:17.697 13:16:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:17.697 13:16:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:17.697 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:17.697 13:16:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:17.697 13:16:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:17.697 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:17.697 13:16:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:17.697 13:16:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:17.697 13:16:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:17.697 13:16:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.697 13:16:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:17.697 13:16:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.697 13:16:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:17.697 Found net devices under 0000:31:00.0: cvl_0_0 00:33:17.697 13:16:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.697 13:16:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:17.697 13:16:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.697 13:16:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:17.697 13:16:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.697 13:16:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:17.697 Found net devices under 0000:31:00.1: cvl_0_1 00:33:17.697 13:16:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.697 13:16:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:33:17.698 13:16:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:33:17.698 13:16:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:33:17.698 13:16:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:33:17.698 13:16:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:33:17.698 13:16:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.698 13:16:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.698 13:16:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.698 13:16:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:17.698 13:16:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.698 13:16:21 -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.698 13:16:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:17.698 13:16:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.698 13:16:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.698 13:16:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:17.698 13:16:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:17.698 13:16:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.698 13:16:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.698 13:16:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.698 13:16:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.698 13:16:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:17.698 13:16:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.698 13:16:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.698 13:16:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.698 13:16:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:17.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:33:17.698 00:33:17.698 --- 10.0.0.2 ping statistics --- 00:33:17.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.698 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:33:17.698 13:16:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:17.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:33:17.698 00:33:17.698 --- 10.0.0.1 ping statistics --- 00:33:17.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.698 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:17.698 13:16:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.698 13:16:21 -- nvmf/common.sh@411 -- # return 0 00:33:17.698 13:16:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:33:17.698 13:16:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.698 13:16:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:17.698 13:16:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:17.698 13:16:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.698 13:16:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:17.698 13:16:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:17.698 13:16:21 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:17.698 13:16:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:17.698 13:16:21 -- common/autotest_common.sh@10 -- # set +x 00:33:17.698 13:16:21 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:17.698 13:16:21 -- common/autotest_common.sh@1510 -- # bdfs=() 00:33:17.698 13:16:21 -- common/autotest_common.sh@1510 -- # local bdfs 00:33:17.698 13:16:21 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:33:17.698 13:16:21 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:33:17.698 13:16:21 -- common/autotest_common.sh@1499 -- # bdfs=() 00:33:17.698 13:16:21 -- common/autotest_common.sh@1499 -- # local bdfs 00:33:17.698 13:16:21 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:33:17.698 13:16:21 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:17.698 13:16:21 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:33:17.698 13:16:21 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:33:17.698 13:16:21 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:33:17.698 13:16:21 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:33:17.698 13:16:21 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:33:17.698 13:16:21 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:33:17.698 13:16:21 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:17.698 13:16:21 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:17.698 13:16:21 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:17.698 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.698 13:16:22 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:33:17.698 13:16:22 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:17.698 13:16:22 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:17.698 13:16:22 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:17.698 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.698 13:16:22 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:33:17.698 13:16:22 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:17.698 13:16:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:17.698 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:33:17.958 13:16:22 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:17.958 13:16:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:17.958 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:33:17.958 13:16:22 -- target/identify_passthru.sh@31 -- # nvmfpid=28733 00:33:17.958 13:16:22 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:17.958 13:16:22 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:17.958 13:16:22 -- target/identify_passthru.sh@35 -- # waitforlisten 28733 00:33:17.958 13:16:22 -- common/autotest_common.sh@817 -- # '[' -z 28733 ']' 00:33:17.958 13:16:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.958 13:16:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:17.958 13:16:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.958 13:16:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:17.958 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:33:17.958 [2024-04-26 13:16:22.849812] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:33:17.958 [2024-04-26 13:16:22.849881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.958 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.958 [2024-04-26 13:16:22.918599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:17.958 [2024-04-26 13:16:22.986728] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.958 [2024-04-26 13:16:22.986768] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.958 [2024-04-26 13:16:22.986777] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.958 [2024-04-26 13:16:22.986785] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.958 [2024-04-26 13:16:22.986792] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.959 [2024-04-26 13:16:22.986944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.959 [2024-04-26 13:16:22.987091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:17.959 [2024-04-26 13:16:22.987248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:17.959 [2024-04-26 13:16:22.987249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.899 13:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:18.899 13:16:23 -- common/autotest_common.sh@850 -- # return 0 00:33:18.899 13:16:23 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:18.899 13:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:18.899 13:16:23 -- common/autotest_common.sh@10 -- # set +x 00:33:18.899 INFO: Log level set to 20 00:33:18.899 INFO: Requests: 00:33:18.899 { 00:33:18.899 "jsonrpc": "2.0", 00:33:18.899 "method": "nvmf_set_config", 00:33:18.899 "id": 1, 00:33:18.899 "params": { 00:33:18.899 "admin_cmd_passthru": { 00:33:18.899 "identify_ctrlr": true 00:33:18.899 } 00:33:18.899 } 00:33:18.899 } 00:33:18.899 00:33:18.899 INFO: response: 00:33:18.899 { 00:33:18.899 "jsonrpc": "2.0", 00:33:18.899 "id": 1, 00:33:18.899 "result": true 00:33:18.899 } 00:33:18.899 00:33:18.899 13:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:18.899 13:16:23 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:18.899 13:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:18.899 13:16:23 -- common/autotest_common.sh@10 -- # set +x 00:33:18.899 INFO: Setting log level to 20 00:33:18.899 INFO: Setting log level to 20 00:33:18.899 INFO: Log level set to 20 00:33:18.899 INFO: Log level set to 20 00:33:18.899 INFO: Requests: 00:33:18.899 { 00:33:18.899 "jsonrpc": "2.0", 00:33:18.899 "method": "framework_start_init", 00:33:18.899 "id": 1 00:33:18.899 } 00:33:18.899 00:33:18.899 INFO: Requests: 00:33:18.899 { 00:33:18.899 "jsonrpc": "2.0", 00:33:18.899 "method": "framework_start_init", 00:33:18.899 "id": 1 00:33:18.899 } 00:33:18.899 00:33:18.899 [2024-04-26 13:16:23.707266] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:18.899 INFO: response: 00:33:18.899 { 00:33:18.899 "jsonrpc": "2.0", 00:33:18.899 "id": 1, 00:33:18.899 "result": true 00:33:18.899 } 00:33:18.899 00:33:18.899 INFO: response: 00:33:18.899 { 00:33:18.899 
"jsonrpc": "2.0", 00:33:18.899 "id": 1, 00:33:18.899 "result": true 00:33:18.899 } 00:33:18.899 00:33:18.899 13:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:18.899 13:16:23 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.899 13:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:18.899 13:16:23 -- common/autotest_common.sh@10 -- # set +x 00:33:18.899 INFO: Setting log level to 40 00:33:18.899 INFO: Setting log level to 40 00:33:18.899 INFO: Setting log level to 40 00:33:18.899 [2024-04-26 13:16:23.720523] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.899 13:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:18.899 13:16:23 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:18.899 13:16:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:18.899 13:16:23 -- common/autotest_common.sh@10 -- # set +x 00:33:18.899 13:16:23 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:33:18.899 13:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:18.899 13:16:23 -- common/autotest_common.sh@10 -- # set +x 00:33:19.160 Nvme0n1 00:33:19.160 13:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:19.160 13:16:24 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:19.160 13:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:19.160 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:33:19.160 13:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:19.160 13:16:24 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:19.160 13:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:19.160 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:33:19.160 13:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:19.160 13:16:24 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.160 13:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:19.160 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:33:19.160 [2024-04-26 13:16:24.108240] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.160 13:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:19.160 13:16:24 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:19.160 13:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:19.160 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:33:19.160 [2024-04-26 13:16:24.120050] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:19.160 [ 00:33:19.160 { 00:33:19.160 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:19.160 "subtype": "Discovery", 00:33:19.160 "listen_addresses": [], 00:33:19.160 "allow_any_host": true, 00:33:19.160 "hosts": [] 00:33:19.160 }, 00:33:19.160 { 00:33:19.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.160 "subtype": "NVMe", 00:33:19.160 "listen_addresses": [ 00:33:19.160 { 00:33:19.160 "transport": "TCP", 00:33:19.160 "trtype": "TCP", 00:33:19.160 "adrfam": "IPv4", 00:33:19.160 "traddr": "10.0.0.2", 00:33:19.160 "trsvcid": "4420" 00:33:19.160 } 00:33:19.160 ], 
00:33:19.160 "allow_any_host": true, 00:33:19.160 "hosts": [], 00:33:19.160 "serial_number": "SPDK00000000000001", 00:33:19.160 "model_number": "SPDK bdev Controller", 00:33:19.160 "max_namespaces": 1, 00:33:19.160 "min_cntlid": 1, 00:33:19.160 "max_cntlid": 65519, 00:33:19.160 "namespaces": [ 00:33:19.160 { 00:33:19.160 "nsid": 1, 00:33:19.160 "bdev_name": "Nvme0n1", 00:33:19.160 "name": "Nvme0n1", 00:33:19.160 "nguid": "3634473052605494002538450000001F", 00:33:19.160 "uuid": "36344730-5260-5494-0025-38450000001f" 00:33:19.160 } 00:33:19.160 ] 00:33:19.160 } 00:33:19.160 ] 00:33:19.160 13:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:19.160 13:16:24 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:19.160 13:16:24 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:19.160 13:16:24 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:19.160 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.419 13:16:24 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:33:19.420 13:16:24 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:19.420 13:16:24 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:19.420 13:16:24 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:19.420 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.680 13:16:24 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:33:19.680 13:16:24 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:33:19.680 13:16:24 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:33:19.680 13:16:24 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:19.680 13:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:19.680 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:33:19.680 13:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:19.680 13:16:24 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:19.680 13:16:24 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:19.680 13:16:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:19.680 13:16:24 -- nvmf/common.sh@117 -- # sync 00:33:19.680 13:16:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:19.680 13:16:24 -- nvmf/common.sh@120 -- # set +e 00:33:19.680 13:16:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:19.680 13:16:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:19.680 rmmod nvme_tcp 00:33:19.680 rmmod nvme_fabrics 00:33:19.680 rmmod nvme_keyring 00:33:19.680 13:16:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:19.680 13:16:24 -- nvmf/common.sh@124 -- # set -e 00:33:19.680 13:16:24 -- nvmf/common.sh@125 -- # return 0 00:33:19.680 13:16:24 -- nvmf/common.sh@478 -- # '[' -n 28733 ']' 00:33:19.680 13:16:24 -- nvmf/common.sh@479 -- # killprocess 28733 00:33:19.680 13:16:24 -- common/autotest_common.sh@936 -- # '[' -z 28733 ']' 00:33:19.680 13:16:24 -- common/autotest_common.sh@940 -- # kill -0 28733 00:33:19.680 13:16:24 -- common/autotest_common.sh@941 -- # uname 00:33:19.680 13:16:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:19.680 13:16:24 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 28733 00:33:19.680 13:16:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:19.680 13:16:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:19.680 13:16:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 28733' 00:33:19.680 killing process with pid 28733 00:33:19.680 13:16:24 -- common/autotest_common.sh@955 -- # kill 28733 00:33:19.680 [2024-04-26 13:16:24.659406] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:19.680 13:16:24 -- common/autotest_common.sh@960 -- # wait 28733 00:33:19.941 13:16:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:33:19.941 13:16:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:19.941 13:16:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:19.941 13:16:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:19.941 13:16:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:19.941 13:16:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.941 13:16:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:19.941 13:16:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.484 13:16:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:22.484 00:33:22.484 real 0m12.576s 00:33:22.484 user 0m10.131s 00:33:22.484 sys 0m5.923s 00:33:22.484 13:16:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:22.484 13:16:26 -- common/autotest_common.sh@10 -- # set +x 00:33:22.484 ************************************ 00:33:22.484 END TEST nvmf_identify_passthru 00:33:22.484 ************************************ 00:33:22.484 13:16:27 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:22.484 13:16:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:22.484 13:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:22.484 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:33:22.484 ************************************ 00:33:22.484 START TEST nvmf_dif 00:33:22.484 ************************************ 00:33:22.484 13:16:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:22.484 * Looking for test storage... 
00:33:22.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.484 13:16:27 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.484 13:16:27 -- nvmf/common.sh@7 -- # uname -s 00:33:22.484 13:16:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.484 13:16:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.484 13:16:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.484 13:16:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.484 13:16:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.484 13:16:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.484 13:16:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.484 13:16:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.484 13:16:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.484 13:16:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.484 13:16:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:22.484 13:16:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:22.484 13:16:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.484 13:16:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.484 13:16:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.484 13:16:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.484 13:16:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.484 13:16:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.484 13:16:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.484 13:16:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.484 13:16:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.484 13:16:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.484 13:16:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.484 13:16:27 -- paths/export.sh@5 -- # export PATH 00:33:22.484 13:16:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.484 13:16:27 -- nvmf/common.sh@47 -- # : 0 00:33:22.484 13:16:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:22.484 13:16:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:22.484 13:16:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.484 13:16:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.484 13:16:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.484 13:16:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:22.484 13:16:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:22.484 13:16:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:22.484 13:16:27 -- target/dif.sh@15 -- # NULL_META=16 00:33:22.484 13:16:27 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:22.484 13:16:27 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:22.484 13:16:27 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:22.484 13:16:27 -- target/dif.sh@135 -- # nvmftestinit 00:33:22.484 13:16:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:22.484 13:16:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.484 13:16:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:22.484 13:16:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:22.484 13:16:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:22.484 13:16:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.484 13:16:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:22.484 13:16:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.484 13:16:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:33:22.484 13:16:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:33:22.484 13:16:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:22.485 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:33:30.619 13:16:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:30.619 13:16:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:30.619 13:16:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:30.619 13:16:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:30.619 13:16:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:30.619 13:16:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:30.619 13:16:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:30.619 13:16:34 -- nvmf/common.sh@295 -- # net_devs=() 00:33:30.619 13:16:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:30.619 13:16:34 -- nvmf/common.sh@296 -- # e810=() 00:33:30.619 13:16:34 -- nvmf/common.sh@296 -- # local -ga e810 00:33:30.619 13:16:34 -- nvmf/common.sh@297 -- # x722=() 00:33:30.619 13:16:34 -- nvmf/common.sh@297 -- # local -ga x722 00:33:30.619 13:16:34 -- nvmf/common.sh@298 -- # mlx=() 00:33:30.619 13:16:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:30.619 13:16:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:33:30.619 13:16:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.619 13:16:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:30.619 13:16:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:30.619 13:16:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:30.619 13:16:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.619 13:16:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:30.619 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:30.619 13:16:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.619 13:16:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:30.619 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:30.619 13:16:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:30.619 13:16:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.619 13:16:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.619 13:16:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:30.619 13:16:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.619 13:16:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:30.619 Found net devices under 0000:31:00.0: cvl_0_0 00:33:30.619 13:16:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.619 13:16:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.619 13:16:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.619 13:16:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:30.619 13:16:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.619 13:16:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:30.619 Found net devices under 0000:31:00.1: cvl_0_1 00:33:30.619 13:16:34 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:30.619 13:16:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:33:30.619 13:16:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:33:30.619 13:16:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:33:30.619 13:16:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:33:30.619 13:16:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.619 13:16:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.619 13:16:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.619 13:16:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:30.619 13:16:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.619 13:16:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.619 13:16:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:30.619 13:16:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.619 13:16:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.619 13:16:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:30.619 13:16:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:30.619 13:16:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.619 13:16:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.619 13:16:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.619 13:16:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.619 13:16:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:30.619 13:16:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.619 13:16:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.619 13:16:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.619 13:16:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:30.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:33:30.620 00:33:30.620 --- 10.0.0.2 ping statistics --- 00:33:30.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.620 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:33:30.620 13:16:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:30.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:33:30.620 00:33:30.620 --- 10.0.0.1 ping statistics --- 00:33:30.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.620 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:33:30.620 13:16:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.620 13:16:34 -- nvmf/common.sh@411 -- # return 0 00:33:30.620 13:16:34 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:33:30.620 13:16:34 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:32.535 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:33:32.535 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:32.535 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:32.796 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:32.796 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:32.796 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:32.796 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:32.796 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:32.796 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:33.057 13:16:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.057 13:16:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:33.057 13:16:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:33.057 13:16:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.057 13:16:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:33.057 13:16:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:33.057 13:16:37 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:33.057 13:16:37 -- target/dif.sh@137 -- # nvmfappstart 00:33:33.057 13:16:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:33.057 13:16:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:33.057 13:16:37 -- common/autotest_common.sh@10 -- # set +x 00:33:33.057 13:16:37 -- nvmf/common.sh@470 -- # nvmfpid=34778 00:33:33.057 13:16:37 -- nvmf/common.sh@471 -- # waitforlisten 34778 00:33:33.057 13:16:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:33.057 13:16:37 -- common/autotest_common.sh@817 -- # '[' -z 34778 ']' 00:33:33.057 13:16:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.057 13:16:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:33.057 13:16:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
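The block above is the TCP test topology: nvmf_tcp_init moves the target-side port (cvl_0_0) into a private network namespace, leaves the initiator port (cvl_0_1) in the root namespace, and verifies both directions with ping before any NVMe/TCP traffic flows. A condensed sketch of the same setup, using the addresses, interface names, and port from the trace:

# Target NIC lives in namespace cvl_0_0_ns_spdk at 10.0.0.2/24; initiator NIC stays in the
# root namespace at 10.0.0.1/24; TCP port 4420 is opened for the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator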
00:33:33.057 13:16:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:33.057 13:16:37 -- common/autotest_common.sh@10 -- # set +x 00:33:33.057 [2024-04-26 13:16:38.047793] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:33:33.057 [2024-04-26 13:16:38.047867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.057 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.318 [2024-04-26 13:16:38.119968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.318 [2024-04-26 13:16:38.191595] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.318 [2024-04-26 13:16:38.191633] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.318 [2024-04-26 13:16:38.191640] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.318 [2024-04-26 13:16:38.191647] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.318 [2024-04-26 13:16:38.191653] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.318 [2024-04-26 13:16:38.191671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.887 13:16:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:33.887 13:16:38 -- common/autotest_common.sh@850 -- # return 0 00:33:33.887 13:16:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:33.887 13:16:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:33.887 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:33:33.887 13:16:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.887 13:16:38 -- target/dif.sh@139 -- # create_transport 00:33:33.887 13:16:38 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:33.887 13:16:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:33.887 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:33:33.887 [2024-04-26 13:16:38.858358] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.887 13:16:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:33.887 13:16:38 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:33.887 13:16:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:33.887 13:16:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:33.887 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:33:34.147 ************************************ 00:33:34.147 START TEST fio_dif_1_default 00:33:34.147 ************************************ 00:33:34.148 13:16:39 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:33:34.148 13:16:39 -- target/dif.sh@86 -- # create_subsystems 0 00:33:34.148 13:16:39 -- target/dif.sh@28 -- # local sub 00:33:34.148 13:16:39 -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.148 13:16:39 -- target/dif.sh@31 -- # create_subsystem 0 00:33:34.148 13:16:39 -- target/dif.sh@18 -- # local sub_id=0 00:33:34.148 13:16:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:34.148 13:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.148 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:33:34.148 
bdev_null0 00:33:34.148 13:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.148 13:16:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:34.148 13:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.148 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:33:34.148 13:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.148 13:16:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:34.148 13:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.148 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:33:34.148 13:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.148 13:16:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.148 13:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:34.148 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:33:34.148 [2024-04-26 13:16:39.047126] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.148 13:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:34.148 13:16:39 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:34.148 13:16:39 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:34.148 13:16:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:34.148 13:16:39 -- nvmf/common.sh@521 -- # config=() 00:33:34.148 13:16:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.148 13:16:39 -- nvmf/common.sh@521 -- # local subsystem config 00:33:34.148 13:16:39 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.148 13:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:34.148 13:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:34.148 { 00:33:34.148 "params": { 00:33:34.148 "name": "Nvme$subsystem", 00:33:34.148 "trtype": "$TEST_TRANSPORT", 00:33:34.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.148 "adrfam": "ipv4", 00:33:34.148 "trsvcid": "$NVMF_PORT", 00:33:34.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.148 "hdgst": ${hdgst:-false}, 00:33:34.148 "ddgst": ${ddgst:-false} 00:33:34.148 }, 00:33:34.148 "method": "bdev_nvme_attach_controller" 00:33:34.148 } 00:33:34.148 EOF 00:33:34.148 )") 00:33:34.148 13:16:39 -- target/dif.sh@82 -- # gen_fio_conf 00:33:34.148 13:16:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:34.148 13:16:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.148 13:16:39 -- target/dif.sh@54 -- # local file 00:33:34.148 13:16:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:34.148 13:16:39 -- target/dif.sh@56 -- # cat 00:33:34.148 13:16:39 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.148 13:16:39 -- common/autotest_common.sh@1327 -- # shift 00:33:34.148 13:16:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:34.148 13:16:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.148 13:16:39 -- nvmf/common.sh@543 -- # cat 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.148 13:16:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:34.148 13:16:39 -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.148 13:16:39 -- nvmf/common.sh@545 -- # jq . 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:34.148 13:16:39 -- nvmf/common.sh@546 -- # IFS=, 00:33:34.148 13:16:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:34.148 "params": { 00:33:34.148 "name": "Nvme0", 00:33:34.148 "trtype": "tcp", 00:33:34.148 "traddr": "10.0.0.2", 00:33:34.148 "adrfam": "ipv4", 00:33:34.148 "trsvcid": "4420", 00:33:34.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.148 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.148 "hdgst": false, 00:33:34.148 "ddgst": false 00:33:34.148 }, 00:33:34.148 "method": "bdev_nvme_attach_controller" 00:33:34.148 }' 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:34.148 13:16:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:34.148 13:16:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:34.148 13:16:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:34.148 13:16:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:34.148 13:16:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:34.148 13:16:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.718 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:34.718 fio-3.35 00:33:34.718 Starting 1 thread 00:33:34.718 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.956 00:33:46.956 filename0: (groupid=0, jobs=1): err= 0: pid=35314: Fri Apr 26 13:16:49 2024 00:33:46.956 read: IOPS=186, BW=748KiB/s (766kB/s)(7488KiB/10014msec) 00:33:46.956 slat (nsec): min=5289, max=31890, avg=6083.64, stdev=1355.12 00:33:46.956 clat (usec): min=738, max=42890, avg=21380.67, stdev=20351.49 00:33:46.956 lat (usec): min=746, max=42922, avg=21386.75, stdev=20351.50 00:33:46.956 clat percentiles (usec): 00:33:46.956 | 1.00th=[ 816], 5.00th=[ 914], 10.00th=[ 930], 20.00th=[ 947], 00:33:46.956 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[41157], 60.00th=[41157], 00:33:46.956 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:46.956 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:46.956 | 99.99th=[42730] 00:33:46.956 bw ( KiB/s): min= 672, max= 768, per=99.90%, avg=747.20, stdev=33.28, samples=20 00:33:46.956 iops : min= 168, max= 192, avg=186.80, stdev= 8.32, samples=20 00:33:46.956 lat (usec) : 750=0.11%, 1000=47.01% 00:33:46.956 lat (msec) : 2=2.67%, 50=50.21% 00:33:46.956 cpu : usr=94.48%, sys=5.32%, ctx=11, majf=0, minf=203 00:33:46.956 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:46.956 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.956 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:46.956 00:33:46.956 Run status group 0 (all jobs): 00:33:46.956 READ: bw=748KiB/s (766kB/s), 748KiB/s-748KiB/s (766kB/s-766kB/s), io=7488KiB (7668kB), run=10014-10014msec 00:33:46.956 13:16:50 -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:46.956 13:16:50 -- target/dif.sh@43 -- # local sub 00:33:46.956 13:16:50 -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.956 13:16:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:46.956 13:16:50 -- target/dif.sh@36 -- # local sub_id=0 00:33:46.956 13:16:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 00:33:46.956 real 0m11.119s 00:33:46.956 user 0m23.120s 00:33:46.956 sys 0m0.819s 00:33:46.956 13:16:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 ************************************ 00:33:46.956 END TEST fio_dif_1_default 00:33:46.956 ************************************ 00:33:46.956 13:16:50 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:46.956 13:16:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:46.956 13:16:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 ************************************ 00:33:46.956 START TEST fio_dif_1_multi_subsystems 00:33:46.956 ************************************ 00:33:46.956 13:16:50 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:33:46.956 13:16:50 -- target/dif.sh@92 -- # local files=1 00:33:46.956 13:16:50 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:46.956 13:16:50 -- target/dif.sh@28 -- # local sub 00:33:46.956 13:16:50 -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.956 13:16:50 -- target/dif.sh@31 -- # create_subsystem 0 00:33:46.956 13:16:50 -- target/dif.sh@18 -- # local sub_id=0 00:33:46.956 13:16:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 bdev_null0 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 
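Each subtest builds its targets with the same four RPCs, visible in the trace above and continuing below for the second subsystem: create a DIF-capable null bdev, create a subsystem, attach the bdev as a namespace, then expose a TCP listener on the target address. Expressed against scripts/rpc.py (default RPC socket assumed), the sequence for subsystem 0 is roughly:

# Sketch of the per-subsystem setup the test performs through rpc_cmd.
rpc=./scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420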
00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 [2024-04-26 13:16:50.350020] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.956 13:16:50 -- target/dif.sh@31 -- # create_subsystem 1 00:33:46.956 13:16:50 -- target/dif.sh@18 -- # local sub_id=1 00:33:46.956 13:16:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 bdev_null1 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.956 13:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:46.956 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 13:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:46.956 13:16:50 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:46.956 13:16:50 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:46.956 13:16:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:46.956 13:16:50 -- nvmf/common.sh@521 -- # config=() 00:33:46.956 13:16:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.956 13:16:50 -- nvmf/common.sh@521 -- # local subsystem config 00:33:46.956 13:16:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:46.956 13:16:50 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.956 13:16:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:46.956 { 00:33:46.956 "params": { 00:33:46.956 "name": "Nvme$subsystem", 00:33:46.956 "trtype": "$TEST_TRANSPORT", 00:33:46.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.956 "adrfam": "ipv4", 00:33:46.956 "trsvcid": "$NVMF_PORT", 00:33:46.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.956 "hdgst": ${hdgst:-false}, 00:33:46.956 "ddgst": ${ddgst:-false} 00:33:46.956 }, 00:33:46.956 "method": "bdev_nvme_attach_controller" 00:33:46.956 } 00:33:46.956 EOF 00:33:46.956 )") 00:33:46.956 13:16:50 -- target/dif.sh@82 -- # 
gen_fio_conf 00:33:46.956 13:16:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:46.956 13:16:50 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:46.956 13:16:50 -- target/dif.sh@54 -- # local file 00:33:46.956 13:16:50 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:46.956 13:16:50 -- target/dif.sh@56 -- # cat 00:33:46.956 13:16:50 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.956 13:16:50 -- common/autotest_common.sh@1327 -- # shift 00:33:46.956 13:16:50 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:46.956 13:16:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.956 13:16:50 -- nvmf/common.sh@543 -- # cat 00:33:46.956 13:16:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.956 13:16:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:46.956 13:16:50 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:46.956 13:16:50 -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.956 13:16:50 -- target/dif.sh@73 -- # cat 00:33:46.956 13:16:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:46.956 13:16:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:46.956 13:16:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:46.956 { 00:33:46.956 "params": { 00:33:46.956 "name": "Nvme$subsystem", 00:33:46.956 "trtype": "$TEST_TRANSPORT", 00:33:46.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.956 "adrfam": "ipv4", 00:33:46.956 "trsvcid": "$NVMF_PORT", 00:33:46.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.957 "hdgst": ${hdgst:-false}, 00:33:46.957 "ddgst": ${ddgst:-false} 00:33:46.957 }, 00:33:46.957 "method": "bdev_nvme_attach_controller" 00:33:46.957 } 00:33:46.957 EOF 00:33:46.957 )") 00:33:46.957 13:16:50 -- target/dif.sh@72 -- # (( file++ )) 00:33:46.957 13:16:50 -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.957 13:16:50 -- nvmf/common.sh@543 -- # cat 00:33:46.957 13:16:50 -- nvmf/common.sh@545 -- # jq . 
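The JSON assembled above and printed below is what fio receives on /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem, so the spdk_bdev ioengine connects to each NQN over TCP at 10.0.0.2:4420 before the job starts. Each entry is equivalent to an explicit attach RPC of roughly this shape (flag spellings assumed from the stock rpc.py help; sketch only):

# Roughly what one JSON entry does when applied to the bdev layer.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0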
00:33:46.957 13:16:50 -- nvmf/common.sh@546 -- # IFS=, 00:33:46.957 13:16:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:46.957 "params": { 00:33:46.957 "name": "Nvme0", 00:33:46.957 "trtype": "tcp", 00:33:46.957 "traddr": "10.0.0.2", 00:33:46.957 "adrfam": "ipv4", 00:33:46.957 "trsvcid": "4420", 00:33:46.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.957 "hdgst": false, 00:33:46.957 "ddgst": false 00:33:46.957 }, 00:33:46.957 "method": "bdev_nvme_attach_controller" 00:33:46.957 },{ 00:33:46.957 "params": { 00:33:46.957 "name": "Nvme1", 00:33:46.957 "trtype": "tcp", 00:33:46.957 "traddr": "10.0.0.2", 00:33:46.957 "adrfam": "ipv4", 00:33:46.957 "trsvcid": "4420", 00:33:46.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:46.957 "hdgst": false, 00:33:46.957 "ddgst": false 00:33:46.957 }, 00:33:46.957 "method": "bdev_nvme_attach_controller" 00:33:46.957 }' 00:33:46.957 13:16:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:46.957 13:16:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:46.957 13:16:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.957 13:16:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.957 13:16:50 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:46.957 13:16:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:46.957 13:16:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:46.957 13:16:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:46.957 13:16:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:46.957 13:16:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.957 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:46.957 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:46.957 fio-3.35 00:33:46.957 Starting 2 threads 00:33:46.957 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.962 00:33:56.962 filename0: (groupid=0, jobs=1): err= 0: pid=37633: Fri Apr 26 13:17:01 2024 00:33:56.962 read: IOPS=186, BW=747KiB/s (764kB/s)(7472KiB/10009msec) 00:33:56.962 slat (nsec): min=5294, max=32965, avg=7233.97, stdev=3348.35 00:33:56.962 clat (usec): min=879, max=42994, avg=21411.95, stdev=20267.06 00:33:56.962 lat (usec): min=889, max=43003, avg=21419.18, stdev=20266.95 00:33:56.962 clat percentiles (usec): 00:33:56.962 | 1.00th=[ 914], 5.00th=[ 947], 10.00th=[ 979], 20.00th=[ 1237], 00:33:56.962 | 30.00th=[ 1270], 40.00th=[ 1287], 50.00th=[ 2442], 60.00th=[41681], 00:33:56.962 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:56.962 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:33:56.962 | 99.99th=[43254] 00:33:56.962 bw ( KiB/s): min= 704, max= 768, per=50.06%, avg=745.60, stdev=31.32, samples=20 00:33:56.962 iops : min= 176, max= 192, avg=186.40, stdev= 7.83, samples=20 00:33:56.962 lat (usec) : 1000=12.04% 00:33:56.962 lat (msec) : 2=37.85%, 4=0.21%, 50=49.89% 00:33:56.962 cpu : usr=96.88%, sys=2.91%, ctx=15, majf=0, minf=163 00:33:56.962 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.962 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.962 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.962 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:56.962 filename1: (groupid=0, jobs=1): err= 0: pid=37634: Fri Apr 26 13:17:01 2024 00:33:56.962 read: IOPS=185, BW=742KiB/s (759kB/s)(7424KiB/10010msec) 00:33:56.962 slat (nsec): min=5293, max=34066, avg=7177.41, stdev=3339.77 00:33:56.962 clat (usec): min=894, max=44466, avg=21550.67, stdev=20262.84 00:33:56.962 lat (usec): min=900, max=44492, avg=21557.85, stdev=20262.90 00:33:56.962 clat percentiles (usec): 00:33:56.962 | 1.00th=[ 922], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1254], 00:33:56.962 | 30.00th=[ 1270], 40.00th=[ 1287], 50.00th=[41157], 60.00th=[41681], 00:33:56.962 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:56.962 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:33:56.962 | 99.99th=[44303] 00:33:56.962 bw ( KiB/s): min= 672, max= 768, per=49.73%, avg=740.80, stdev=34.86, samples=20 00:33:56.962 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:33:56.962 lat (usec) : 1000=9.91% 00:33:56.962 lat (msec) : 2=39.87%, 50=50.22% 00:33:56.962 cpu : usr=96.82%, sys=2.97%, ctx=12, majf=0, minf=118 00:33:56.962 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.962 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.962 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:56.962 00:33:56.962 Run status group 0 (all jobs): 00:33:56.962 READ: bw=1488KiB/s (1524kB/s), 742KiB/s-747KiB/s (759kB/s-764kB/s), io=14.5MiB (15.3MB), run=10009-10010msec 00:33:56.962 13:17:01 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:56.962 13:17:01 -- target/dif.sh@43 -- # local sub 00:33:56.962 13:17:01 -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.962 13:17:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:56.962 13:17:01 -- target/dif.sh@36 -- # local sub_id=0 00:33:56.962 13:17:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:56.962 13:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.962 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.962 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.962 13:17:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:56.962 13:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.962 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.962 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.962 13:17:01 -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.962 13:17:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:56.962 13:17:01 -- target/dif.sh@36 -- # local sub_id=1 00:33:56.962 13:17:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.962 13:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.962 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.962 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.962 13:17:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:56.963 13:17:01 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.963 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.963 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.963 00:33:56.963 real 0m11.352s 00:33:56.963 user 0m31.442s 00:33:56.963 sys 0m0.912s 00:33:56.963 13:17:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:56.963 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.963 ************************************ 00:33:56.963 END TEST fio_dif_1_multi_subsystems 00:33:56.963 ************************************ 00:33:56.963 13:17:01 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:56.963 13:17:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:56.963 13:17:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:56.963 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.963 ************************************ 00:33:56.963 START TEST fio_dif_rand_params 00:33:56.963 ************************************ 00:33:56.963 13:17:01 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:33:56.963 13:17:01 -- target/dif.sh@100 -- # local NULL_DIF 00:33:56.963 13:17:01 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:56.963 13:17:01 -- target/dif.sh@103 -- # NULL_DIF=3 00:33:56.963 13:17:01 -- target/dif.sh@103 -- # bs=128k 00:33:56.963 13:17:01 -- target/dif.sh@103 -- # numjobs=3 00:33:56.963 13:17:01 -- target/dif.sh@103 -- # iodepth=3 00:33:56.963 13:17:01 -- target/dif.sh@103 -- # runtime=5 00:33:56.963 13:17:01 -- target/dif.sh@105 -- # create_subsystems 0 00:33:56.963 13:17:01 -- target/dif.sh@28 -- # local sub 00:33:56.963 13:17:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.963 13:17:01 -- target/dif.sh@31 -- # create_subsystem 0 00:33:56.963 13:17:01 -- target/dif.sh@18 -- # local sub_id=0 00:33:56.963 13:17:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:56.963 13:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.963 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.963 bdev_null0 00:33:56.963 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.963 13:17:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:56.963 13:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.963 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.963 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.963 13:17:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:56.963 13:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.963 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.963 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.963 13:17:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:56.963 13:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:56.963 13:17:01 -- common/autotest_common.sh@10 -- # set +x 00:33:56.963 [2024-04-26 13:17:01.890019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.963 13:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:56.963 13:17:01 -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:56.963 13:17:01 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:56.963 
13:17:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:56.963 13:17:01 -- nvmf/common.sh@521 -- # config=() 00:33:56.963 13:17:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.963 13:17:01 -- nvmf/common.sh@521 -- # local subsystem config 00:33:56.963 13:17:01 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.963 13:17:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:56.963 13:17:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:56.963 { 00:33:56.963 "params": { 00:33:56.963 "name": "Nvme$subsystem", 00:33:56.963 "trtype": "$TEST_TRANSPORT", 00:33:56.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.963 "adrfam": "ipv4", 00:33:56.963 "trsvcid": "$NVMF_PORT", 00:33:56.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.963 "hdgst": ${hdgst:-false}, 00:33:56.963 "ddgst": ${ddgst:-false} 00:33:56.963 }, 00:33:56.963 "method": "bdev_nvme_attach_controller" 00:33:56.963 } 00:33:56.963 EOF 00:33:56.963 )") 00:33:56.963 13:17:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:56.963 13:17:01 -- target/dif.sh@82 -- # gen_fio_conf 00:33:56.963 13:17:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.963 13:17:01 -- target/dif.sh@54 -- # local file 00:33:56.963 13:17:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:56.963 13:17:01 -- target/dif.sh@56 -- # cat 00:33:56.963 13:17:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.963 13:17:01 -- common/autotest_common.sh@1327 -- # shift 00:33:56.963 13:17:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:56.963 13:17:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.963 13:17:01 -- nvmf/common.sh@543 -- # cat 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.963 13:17:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:56.963 13:17:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:56.963 13:17:01 -- nvmf/common.sh@545 -- # jq . 
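This rand_params pass (NULL_DIF=3, 128k blocks, 3 jobs, queue depth 3, 5-second runtime, per the parameters set above) hands fio a job equivalent to the sketch below; the bdev name and the global section layout are assumptions about the generated job file, not shown verbatim in the log, while the I/O parameters come straight from the trace:

# Hedged sketch of the generated fio job for the first rand_params pass.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1
EOF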
00:33:56.963 13:17:01 -- nvmf/common.sh@546 -- # IFS=, 00:33:56.963 13:17:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:56.963 "params": { 00:33:56.963 "name": "Nvme0", 00:33:56.963 "trtype": "tcp", 00:33:56.963 "traddr": "10.0.0.2", 00:33:56.963 "adrfam": "ipv4", 00:33:56.963 "trsvcid": "4420", 00:33:56.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.963 "hdgst": false, 00:33:56.963 "ddgst": false 00:33:56.963 }, 00:33:56.963 "method": "bdev_nvme_attach_controller" 00:33:56.963 }' 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:56.963 13:17:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:56.963 13:17:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:56.963 13:17:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:56.963 13:17:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:56.963 13:17:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:56.963 13:17:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.532 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:57.532 ... 00:33:57.532 fio-3.35 00:33:57.532 Starting 3 threads 00:33:57.532 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.173 00:34:04.173 filename0: (groupid=0, jobs=1): err= 0: pid=40037: Fri Apr 26 13:17:07 2024 00:34:04.173 read: IOPS=239, BW=30.0MiB/s (31.5MB/s)(151MiB/5021msec) 00:34:04.173 slat (nsec): min=5402, max=35140, avg=7472.52, stdev=1894.43 00:34:04.173 clat (usec): min=6076, max=90894, avg=12489.90, stdev=7522.70 00:34:04.173 lat (usec): min=6084, max=90901, avg=12497.38, stdev=7522.66 00:34:04.173 clat percentiles (usec): 00:34:04.173 | 1.00th=[ 6718], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 9241], 00:34:04.173 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:34:04.173 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14484], 95.00th=[15664], 00:34:04.173 | 99.00th=[50594], 99.50th=[52167], 99.90th=[88605], 99.95th=[90702], 00:34:04.173 | 99.99th=[90702] 00:34:04.173 bw ( KiB/s): min=19712, max=34816, per=35.26%, avg=30771.20, stdev=4404.06, samples=10 00:34:04.173 iops : min= 154, max= 272, avg=240.40, stdev=34.41, samples=10 00:34:04.173 lat (msec) : 10=28.96%, 20=67.97%, 50=1.49%, 100=1.58% 00:34:04.173 cpu : usr=95.42%, sys=4.32%, ctx=12, majf=0, minf=41 00:34:04.173 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 issued rwts: total=1205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:04.173 filename0: (groupid=0, jobs=1): err= 0: pid=40038: Fri Apr 26 13:17:07 2024 00:34:04.173 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(149MiB/5045msec) 00:34:04.173 slat (nsec): min=5376, max=54328, avg=8042.96, stdev=2492.38 00:34:04.173 clat (usec): 
min=5167, max=90709, avg=12639.63, stdev=9708.37 00:34:04.173 lat (usec): min=5175, max=90715, avg=12647.68, stdev=9708.20 00:34:04.173 clat percentiles (usec): 00:34:04.173 | 1.00th=[ 5800], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 8455], 00:34:04.173 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11338], 00:34:04.173 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14615], 95.00th=[18220], 00:34:04.173 | 99.00th=[52167], 99.50th=[54789], 99.90th=[89654], 99.95th=[90702], 00:34:04.173 | 99.99th=[90702] 00:34:04.173 bw ( KiB/s): min=20992, max=40704, per=34.90%, avg=30464.00, stdev=5848.91, samples=10 00:34:04.173 iops : min= 164, max= 318, avg=238.00, stdev=45.69, samples=10 00:34:04.173 lat (msec) : 10=41.91%, 20=53.14%, 50=2.51%, 100=2.43% 00:34:04.173 cpu : usr=95.88%, sys=3.87%, ctx=20, majf=0, minf=197 00:34:04.173 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 issued rwts: total=1193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:04.173 filename0: (groupid=0, jobs=1): err= 0: pid=40039: Fri Apr 26 13:17:07 2024 00:34:04.173 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(130MiB/5004msec) 00:34:04.173 slat (nsec): min=5317, max=36276, avg=7918.68, stdev=2016.99 00:34:04.173 clat (usec): min=5177, max=91598, avg=14394.10, stdev=11300.90 00:34:04.173 lat (usec): min=5183, max=91608, avg=14402.02, stdev=11300.84 00:34:04.173 clat percentiles (usec): 00:34:04.173 | 1.00th=[ 5866], 5.00th=[ 7373], 10.00th=[ 7963], 20.00th=[ 9372], 00:34:04.173 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11469], 60.00th=[12256], 00:34:04.173 | 70.00th=[13304], 80.00th=[14353], 90.00th=[15795], 95.00th=[49546], 00:34:04.173 | 99.00th=[52691], 99.50th=[54264], 99.90th=[91751], 99.95th=[91751], 00:34:04.173 | 99.99th=[91751] 00:34:04.173 bw ( KiB/s): min=18688, max=32000, per=30.47%, avg=26598.40, stdev=4907.15, samples=10 00:34:04.173 iops : min= 146, max= 250, avg=207.80, stdev=38.34, samples=10 00:34:04.173 lat (msec) : 10=26.78%, 20=65.83%, 50=2.69%, 100=4.70% 00:34:04.173 cpu : usr=95.06%, sys=4.66%, ctx=16, majf=0, minf=134 00:34:04.173 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.174 issued rwts: total=1042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.174 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:04.174 00:34:04.174 Run status group 0 (all jobs): 00:34:04.174 READ: bw=85.2MiB/s (89.4MB/s), 26.0MiB/s-30.0MiB/s (27.3MB/s-31.5MB/s), io=430MiB (451MB), run=5004-5045msec 00:34:04.174 13:17:08 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:04.174 13:17:08 -- target/dif.sh@43 -- # local sub 00:34:04.174 13:17:08 -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.174 13:17:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:04.174 13:17:08 -- target/dif.sh@36 -- # local sub_id=0 00:34:04.174 13:17:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
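Teardown between passes is the mirror image of the setup: the subsystem is deleted first (just above), then the backing null bdev (just below), before the next pass re-creates everything with new DIF parameters. As explicit RPCs, matching the rpc_cmd calls in the trace, that is simply:

# Sketch of the per-pass teardown (default RPC socket assumed).
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0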
00:34:04.174 13:17:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:04.174 13:17:08 -- target/dif.sh@109 -- # bs=4k 00:34:04.174 13:17:08 -- target/dif.sh@109 -- # numjobs=8 00:34:04.174 13:17:08 -- target/dif.sh@109 -- # iodepth=16 00:34:04.174 13:17:08 -- target/dif.sh@109 -- # runtime= 00:34:04.174 13:17:08 -- target/dif.sh@109 -- # files=2 00:34:04.174 13:17:08 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:04.174 13:17:08 -- target/dif.sh@28 -- # local sub 00:34:04.174 13:17:08 -- target/dif.sh@30 -- # for sub in "$@" 00:34:04.174 13:17:08 -- target/dif.sh@31 -- # create_subsystem 0 00:34:04.174 13:17:08 -- target/dif.sh@18 -- # local sub_id=0 00:34:04.174 13:17:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 bdev_null0 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 [2024-04-26 13:17:08.167497] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@30 -- # for sub in "$@" 00:34:04.174 13:17:08 -- target/dif.sh@31 -- # create_subsystem 1 00:34:04.174 13:17:08 -- target/dif.sh@18 -- # local sub_id=1 00:34:04.174 13:17:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 bdev_null1 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:04.174 13:17:08 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@30 -- # for sub in "$@" 00:34:04.174 13:17:08 -- target/dif.sh@31 -- # create_subsystem 2 00:34:04.174 13:17:08 -- target/dif.sh@18 -- # local sub_id=2 00:34:04.174 13:17:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 bdev_null2 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:04.174 13:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:04.174 13:17:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 13:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:04.174 13:17:08 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:04.174 13:17:08 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:04.174 13:17:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:04.174 13:17:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.174 13:17:08 -- nvmf/common.sh@521 -- # config=() 00:34:04.174 13:17:08 -- nvmf/common.sh@521 -- # local subsystem config 00:34:04.174 13:17:08 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.174 13:17:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:04.174 13:17:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:04.174 { 00:34:04.174 "params": { 00:34:04.174 "name": "Nvme$subsystem", 00:34:04.174 "trtype": "$TEST_TRANSPORT", 00:34:04.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.174 "adrfam": "ipv4", 00:34:04.174 "trsvcid": "$NVMF_PORT", 00:34:04.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.174 "hdgst": ${hdgst:-false}, 00:34:04.174 "ddgst": ${ddgst:-false} 00:34:04.174 }, 00:34:04.174 "method": "bdev_nvme_attach_controller" 00:34:04.174 } 00:34:04.174 EOF 00:34:04.174 )") 00:34:04.174 13:17:08 -- common/autotest_common.sh@1323 -- # 
local fio_dir=/usr/src/fio 00:34:04.174 13:17:08 -- target/dif.sh@82 -- # gen_fio_conf 00:34:04.174 13:17:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:04.174 13:17:08 -- target/dif.sh@54 -- # local file 00:34:04.174 13:17:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:04.174 13:17:08 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.174 13:17:08 -- target/dif.sh@56 -- # cat 00:34:04.174 13:17:08 -- common/autotest_common.sh@1327 -- # shift 00:34:04.174 13:17:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:04.174 13:17:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:04.174 13:17:08 -- nvmf/common.sh@543 -- # cat 00:34:04.174 13:17:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.174 13:17:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:04.174 13:17:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:04.174 13:17:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:04.174 13:17:08 -- target/dif.sh@72 -- # (( file <= files )) 00:34:04.174 13:17:08 -- target/dif.sh@73 -- # cat 00:34:04.174 13:17:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:04.174 13:17:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:04.174 { 00:34:04.174 "params": { 00:34:04.174 "name": "Nvme$subsystem", 00:34:04.174 "trtype": "$TEST_TRANSPORT", 00:34:04.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.174 "adrfam": "ipv4", 00:34:04.174 "trsvcid": "$NVMF_PORT", 00:34:04.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.174 "hdgst": ${hdgst:-false}, 00:34:04.174 "ddgst": ${ddgst:-false} 00:34:04.174 }, 00:34:04.174 "method": "bdev_nvme_attach_controller" 00:34:04.174 } 00:34:04.174 EOF 00:34:04.174 )") 00:34:04.174 13:17:08 -- target/dif.sh@72 -- # (( file++ )) 00:34:04.174 13:17:08 -- target/dif.sh@72 -- # (( file <= files )) 00:34:04.174 13:17:08 -- nvmf/common.sh@543 -- # cat 00:34:04.174 13:17:08 -- target/dif.sh@73 -- # cat 00:34:04.174 13:17:08 -- target/dif.sh@72 -- # (( file++ )) 00:34:04.174 13:17:08 -- target/dif.sh@72 -- # (( file <= files )) 00:34:04.174 13:17:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:04.174 13:17:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:04.174 { 00:34:04.174 "params": { 00:34:04.174 "name": "Nvme$subsystem", 00:34:04.174 "trtype": "$TEST_TRANSPORT", 00:34:04.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.174 "adrfam": "ipv4", 00:34:04.174 "trsvcid": "$NVMF_PORT", 00:34:04.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.175 "hdgst": ${hdgst:-false}, 00:34:04.175 "ddgst": ${ddgst:-false} 00:34:04.175 }, 00:34:04.175 "method": "bdev_nvme_attach_controller" 00:34:04.175 } 00:34:04.175 EOF 00:34:04.175 )") 00:34:04.175 13:17:08 -- nvmf/common.sh@543 -- # cat 00:34:04.175 13:17:08 -- nvmf/common.sh@545 -- # jq . 
00:34:04.175 13:17:08 -- nvmf/common.sh@546 -- # IFS=, 00:34:04.175 13:17:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:04.175 "params": { 00:34:04.175 "name": "Nvme0", 00:34:04.175 "trtype": "tcp", 00:34:04.175 "traddr": "10.0.0.2", 00:34:04.175 "adrfam": "ipv4", 00:34:04.175 "trsvcid": "4420", 00:34:04.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:04.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:04.175 "hdgst": false, 00:34:04.175 "ddgst": false 00:34:04.175 }, 00:34:04.175 "method": "bdev_nvme_attach_controller" 00:34:04.175 },{ 00:34:04.175 "params": { 00:34:04.175 "name": "Nvme1", 00:34:04.175 "trtype": "tcp", 00:34:04.175 "traddr": "10.0.0.2", 00:34:04.175 "adrfam": "ipv4", 00:34:04.175 "trsvcid": "4420", 00:34:04.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:04.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:04.175 "hdgst": false, 00:34:04.175 "ddgst": false 00:34:04.175 }, 00:34:04.175 "method": "bdev_nvme_attach_controller" 00:34:04.175 },{ 00:34:04.175 "params": { 00:34:04.175 "name": "Nvme2", 00:34:04.175 "trtype": "tcp", 00:34:04.175 "traddr": "10.0.0.2", 00:34:04.175 "adrfam": "ipv4", 00:34:04.175 "trsvcid": "4420", 00:34:04.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:04.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:04.175 "hdgst": false, 00:34:04.175 "ddgst": false 00:34:04.175 }, 00:34:04.175 "method": "bdev_nvme_attach_controller" 00:34:04.175 }' 00:34:04.175 13:17:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:04.175 13:17:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:04.175 13:17:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:04.175 13:17:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.175 13:17:08 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:04.175 13:17:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:04.175 13:17:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:04.175 13:17:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:04.175 13:17:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:04.175 13:17:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.175 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:04.175 ... 00:34:04.175 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:04.175 ... 00:34:04.175 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:04.175 ... 
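The trace above assembles one bdev_nvme_attach_controller entry per subsystem (cnode0 through cnode2) and then launches fio with the SPDK bdev plugin preloaded, passing the JSON target config on /dev/fd/62 and the generated job file on /dev/fd/61. The following is a minimal stand-alone sketch of that invocation, not the dif.sh helper itself: the plugin path, fio path, option names, and controller parameters are taken from the trace, while the JSON wrapper follows the usual SPDK "subsystems"/"bdev" config layout, and the temp-file paths, thread/direct settings, and filename= values are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: re-create the shape of the fio_bdev invocation traced above.
# Assumes the SPDK checkout and fio binary at the paths seen in this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target config for the SPDK bdev layer: one NVMe/TCP controller per subsystem.
# Only Nvme0/cnode0 is shown; the trace adds matching entries for cnode1 and cnode2.
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Job file matching the parameters traced above (randread, bs=4k, iodepth=16, numjobs=8).
# The filename= value is an assumption; it must name one of the attached bdevs.
cat > /tmp/dif.fio <<'FIO'
[global]
thread=1
direct=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1
FIO

# Run fio through the SPDK bdev plugin, as the harness does via /dev/fd/62 and /dev/fd/61.
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio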
00:34:04.175 fio-3.35 00:34:04.175 Starting 24 threads 00:34:04.175 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.405 00:34:16.405 filename0: (groupid=0, jobs=1): err= 0: pid=41396: Fri Apr 26 13:17:19 2024 00:34:16.405 read: IOPS=513, BW=2056KiB/s (2105kB/s)(20.1MiB/10024msec) 00:34:16.405 slat (nsec): min=5468, max=86670, avg=10290.29, stdev=7278.15 00:34:16.405 clat (usec): min=1057, max=34784, avg=31043.89, stdev=4778.75 00:34:16.405 lat (usec): min=1074, max=34818, avg=31054.18, stdev=4778.71 00:34:16.405 clat percentiles (usec): 00:34:16.405 | 1.00th=[ 4817], 5.00th=[21627], 10.00th=[23200], 20.00th=[32113], 00:34:16.405 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.405 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:34:16.405 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:34:16.405 | 99.99th=[34866] 00:34:16.405 bw ( KiB/s): min= 1916, max= 2560, per=4.34%, avg=2047.32, stdev=176.05, samples=19 00:34:16.405 iops : min= 479, max= 640, avg=511.79, stdev=43.98, samples=19 00:34:16.405 lat (msec) : 2=0.04%, 4=0.89%, 10=0.62%, 20=0.62%, 50=97.83% 00:34:16.405 cpu : usr=99.02%, sys=0.61%, ctx=84, majf=0, minf=54 00:34:16.405 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.405 filename0: (groupid=0, jobs=1): err= 0: pid=41397: Fri Apr 26 13:17:19 2024 00:34:16.405 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10020msec) 00:34:16.405 slat (usec): min=4, max=115, avg=22.09, stdev=16.42 00:34:16.405 clat (usec): min=18637, max=39774, avg=32547.65, stdev=1110.66 00:34:16.405 lat (usec): min=18645, max=39788, avg=32569.75, stdev=1109.59 00:34:16.405 clat percentiles (usec): 00:34:16.405 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:34:16.405 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.405 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.405 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36963], 99.95th=[39584], 00:34:16.405 | 99.99th=[39584] 00:34:16.405 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1950.65, stdev=55.92, samples=20 00:34:16.405 iops : min= 479, max= 512, avg=487.55, stdev=13.79, samples=20 00:34:16.405 lat (msec) : 20=0.33%, 50=99.67% 00:34:16.405 cpu : usr=99.12%, sys=0.59%, ctx=25, majf=0, minf=32 00:34:16.405 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.405 filename0: (groupid=0, jobs=1): err= 0: pid=41398: Fri Apr 26 13:17:19 2024 00:34:16.405 read: IOPS=492, BW=1968KiB/s (2015kB/s)(19.2MiB/10007msec) 00:34:16.405 slat (usec): min=5, max=108, avg=15.80, stdev=13.78 00:34:16.405 clat (usec): min=12053, max=56508, avg=32447.91, stdev=4366.56 00:34:16.405 lat (usec): min=12060, max=56520, avg=32463.71, stdev=4366.41 00:34:16.405 clat percentiles (usec): 00:34:16.405 | 1.00th=[21103], 
5.00th=[25297], 10.00th=[26346], 20.00th=[30016], 00:34:16.405 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:34:16.405 | 70.00th=[33162], 80.00th=[33817], 90.00th=[37487], 95.00th=[39584], 00:34:16.405 | 99.00th=[46924], 99.50th=[49546], 99.90th=[55313], 99.95th=[56361], 00:34:16.405 | 99.99th=[56361] 00:34:16.405 bw ( KiB/s): min= 1763, max= 2059, per=4.15%, avg=1958.84, stdev=72.10, samples=19 00:34:16.405 iops : min= 440, max= 514, avg=489.63, stdev=18.08, samples=19 00:34:16.405 lat (msec) : 20=0.67%, 50=98.94%, 100=0.39% 00:34:16.405 cpu : usr=98.41%, sys=0.96%, ctx=147, majf=0, minf=59 00:34:16.405 IO depths : 1=0.2%, 2=0.5%, 4=3.6%, 8=79.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:34:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 complete : 0=0.0%, 4=89.2%, 8=8.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.405 filename0: (groupid=0, jobs=1): err= 0: pid=41399: Fri Apr 26 13:17:19 2024 00:34:16.405 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10005msec) 00:34:16.405 slat (nsec): min=5464, max=90622, avg=20379.85, stdev=13425.03 00:34:16.405 clat (usec): min=16593, max=49773, avg=32427.47, stdev=2556.21 00:34:16.405 lat (usec): min=16600, max=49796, avg=32447.85, stdev=2556.68 00:34:16.405 clat percentiles (usec): 00:34:16.405 | 1.00th=[22414], 5.00th=[27657], 10.00th=[31851], 20.00th=[32113], 00:34:16.405 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.405 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:34:16.405 | 99.00th=[40109], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:34:16.405 | 99.99th=[49546] 00:34:16.405 bw ( KiB/s): min= 1795, max= 2059, per=4.15%, avg=1958.79, stdev=71.29, samples=19 00:34:16.405 iops : min= 448, max= 514, avg=489.58, stdev=17.83, samples=19 00:34:16.405 lat (msec) : 20=0.08%, 50=99.92% 00:34:16.405 cpu : usr=99.02%, sys=0.67%, ctx=58, majf=0, minf=55 00:34:16.405 IO depths : 1=4.7%, 2=9.6%, 4=19.8%, 8=57.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:34:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.405 filename0: (groupid=0, jobs=1): err= 0: pid=41400: Fri Apr 26 13:17:19 2024 00:34:16.405 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10005msec) 00:34:16.405 slat (usec): min=5, max=104, avg=21.12, stdev=15.44 00:34:16.405 clat (usec): min=21204, max=44168, avg=32600.98, stdev=1124.06 00:34:16.405 lat (usec): min=21211, max=44178, avg=32622.09, stdev=1123.14 00:34:16.405 clat percentiles (usec): 00:34:16.405 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:34:16.405 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.405 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.405 | 99.00th=[35390], 99.50th=[40633], 99.90th=[41681], 99.95th=[43254], 00:34:16.405 | 99.99th=[44303] 00:34:16.405 bw ( KiB/s): min= 1904, max= 2048, per=4.14%, avg=1952.74, stdev=58.16, samples=19 00:34:16.405 iops : min= 476, max= 512, avg=488.11, stdev=14.50, samples=19 00:34:16.405 lat (msec) : 50=100.00% 00:34:16.405 cpu : usr=99.24%, sys=0.47%, ctx=10, majf=0, minf=37 
00:34:16.405 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:34:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.405 filename0: (groupid=0, jobs=1): err= 0: pid=41401: Fri Apr 26 13:17:19 2024 00:34:16.405 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10011msec) 00:34:16.405 slat (nsec): min=5621, max=76813, avg=21411.79, stdev=12905.99 00:34:16.405 clat (usec): min=14156, max=53258, avg=32624.32, stdev=1672.30 00:34:16.405 lat (usec): min=14163, max=53277, avg=32645.73, stdev=1671.78 00:34:16.405 clat percentiles (usec): 00:34:16.405 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:16.405 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.405 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.405 | 99.00th=[34341], 99.50th=[34866], 99.90th=[53216], 99.95th=[53216], 00:34:16.405 | 99.99th=[53216] 00:34:16.405 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.11, stdev=67.34, samples=19 00:34:16.405 iops : min= 448, max= 512, avg=486.37, stdev=16.67, samples=19 00:34:16.405 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:34:16.405 cpu : usr=98.57%, sys=0.82%, ctx=162, majf=0, minf=39 00:34:16.405 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.405 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.405 filename0: (groupid=0, jobs=1): err= 0: pid=41402: Fri Apr 26 13:17:19 2024 00:34:16.405 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10013msec) 00:34:16.405 slat (nsec): min=5474, max=84987, avg=19859.74, stdev=13624.69 00:34:16.405 clat (usec): min=14320, max=63567, avg=32623.60, stdev=2438.58 00:34:16.405 lat (usec): min=14345, max=63583, avg=32643.46, stdev=2437.86 00:34:16.405 clat percentiles (usec): 00:34:16.406 | 1.00th=[22676], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:16.406 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.406 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:16.406 | 99.00th=[42730], 99.50th=[45876], 99.90th=[51643], 99.95th=[51643], 00:34:16.406 | 99.99th=[63701] 00:34:16.406 bw ( KiB/s): min= 1840, max= 2043, per=4.13%, avg=1948.21, stdev=58.71, samples=19 00:34:16.406 iops : min= 460, max= 510, avg=486.89, stdev=14.41, samples=19 00:34:16.406 lat (msec) : 20=0.80%, 50=98.87%, 100=0.33% 00:34:16.406 cpu : usr=99.06%, sys=0.65%, ctx=18, majf=0, minf=48 00:34:16.406 IO depths : 1=5.6%, 2=11.6%, 4=24.6%, 8=51.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.406 filename0: (groupid=0, jobs=1): err= 0: pid=41404: Fri Apr 26 13:17:19 2024 00:34:16.406 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10023msec) 00:34:16.406 
slat (usec): min=5, max=100, avg=13.56, stdev=10.78 00:34:16.406 clat (usec): min=17059, max=50060, avg=32130.28, stdev=3573.15 00:34:16.406 lat (usec): min=17067, max=50077, avg=32143.84, stdev=3574.11 00:34:16.406 clat percentiles (usec): 00:34:16.406 | 1.00th=[20841], 5.00th=[25035], 10.00th=[26870], 20.00th=[32113], 00:34:16.406 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.406 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[38011], 00:34:16.406 | 99.00th=[42206], 99.50th=[44303], 99.90th=[46924], 99.95th=[50070], 00:34:16.406 | 99.99th=[50070] 00:34:16.406 bw ( KiB/s): min= 1884, max= 2123, per=4.21%, avg=1984.95, stdev=79.50, samples=20 00:34:16.406 iops : min= 471, max= 530, avg=496.20, stdev=19.81, samples=20 00:34:16.406 lat (msec) : 20=0.56%, 50=99.36%, 100=0.08% 00:34:16.406 cpu : usr=98.75%, sys=0.87%, ctx=126, majf=0, minf=74 00:34:16.406 IO depths : 1=2.7%, 2=5.6%, 4=13.7%, 8=67.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 issued rwts: total=4974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.406 filename1: (groupid=0, jobs=1): err= 0: pid=41405: Fri Apr 26 13:17:19 2024 00:34:16.406 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10010msec) 00:34:16.406 slat (nsec): min=5474, max=98280, avg=21846.86, stdev=15631.90 00:34:16.406 clat (usec): min=20603, max=50600, avg=32605.84, stdev=1408.86 00:34:16.406 lat (usec): min=20614, max=50616, avg=32627.68, stdev=1408.05 00:34:16.406 clat percentiles (usec): 00:34:16.406 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:34:16.406 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.406 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.406 | 99.00th=[34866], 99.50th=[35914], 99.90th=[50594], 99.95th=[50594], 00:34:16.406 | 99.99th=[50594] 00:34:16.406 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1946.00, stdev=68.11, samples=19 00:34:16.406 iops : min= 448, max= 512, avg=486.42, stdev=17.06, samples=19 00:34:16.406 lat (msec) : 50=99.67%, 100=0.33% 00:34:16.406 cpu : usr=99.15%, sys=0.47%, ctx=88, majf=0, minf=47 00:34:16.406 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.406 filename1: (groupid=0, jobs=1): err= 0: pid=41406: Fri Apr 26 13:17:19 2024 00:34:16.406 read: IOPS=497, BW=1989KiB/s (2036kB/s)(19.5MiB/10019msec) 00:34:16.406 slat (nsec): min=5468, max=91611, avg=16981.79, stdev=14542.62 00:34:16.406 clat (usec): min=11330, max=43463, avg=32037.03, stdev=2969.96 00:34:16.406 lat (usec): min=11338, max=43470, avg=32054.01, stdev=2971.29 00:34:16.406 clat percentiles (usec): 00:34:16.406 | 1.00th=[17695], 5.00th=[26608], 10.00th=[31851], 20.00th=[32113], 00:34:16.406 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.406 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.406 | 99.00th=[36439], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:34:16.406 | 
99.99th=[43254] 00:34:16.406 bw ( KiB/s): min= 1916, max= 2448, per=4.21%, avg=1984.90, stdev=133.24, samples=20 00:34:16.406 iops : min= 479, max= 612, avg=496.15, stdev=33.31, samples=20 00:34:16.406 lat (msec) : 20=1.87%, 50=98.13% 00:34:16.406 cpu : usr=98.97%, sys=0.67%, ctx=34, majf=0, minf=61 00:34:16.406 IO depths : 1=5.3%, 2=11.0%, 4=23.2%, 8=53.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 issued rwts: total=4981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.406 filename1: (groupid=0, jobs=1): err= 0: pid=41407: Fri Apr 26 13:17:19 2024 00:34:16.406 read: IOPS=488, BW=1954KiB/s (2000kB/s)(19.1MiB/10025msec) 00:34:16.406 slat (nsec): min=5472, max=98614, avg=14503.49, stdev=10493.17 00:34:16.406 clat (usec): min=19398, max=41962, avg=32639.08, stdev=1157.32 00:34:16.406 lat (usec): min=19404, max=41979, avg=32653.59, stdev=1157.26 00:34:16.406 clat percentiles (usec): 00:34:16.406 | 1.00th=[30802], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:16.406 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.406 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.406 | 99.00th=[34341], 99.50th=[34866], 99.90th=[41681], 99.95th=[42206], 00:34:16.406 | 99.99th=[42206] 00:34:16.406 bw ( KiB/s): min= 1912, max= 2048, per=4.14%, avg=1950.95, stdev=56.94, samples=20 00:34:16.406 iops : min= 478, max= 512, avg=487.70, stdev=14.17, samples=20 00:34:16.406 lat (msec) : 20=0.33%, 50=99.67% 00:34:16.406 cpu : usr=99.23%, sys=0.47%, ctx=23, majf=0, minf=63 00:34:16.406 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.406 filename1: (groupid=0, jobs=1): err= 0: pid=41408: Fri Apr 26 13:17:19 2024 00:34:16.406 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec) 00:34:16.406 slat (nsec): min=5456, max=73884, avg=12823.59, stdev=9264.96 00:34:16.406 clat (usec): min=16225, max=48230, avg=32685.45, stdev=1291.08 00:34:16.406 lat (usec): min=16234, max=48242, avg=32698.27, stdev=1291.18 00:34:16.406 clat percentiles (usec): 00:34:16.406 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:16.406 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:34:16.406 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:16.406 | 99.00th=[35914], 99.50th=[36439], 99.90th=[46400], 99.95th=[46924], 00:34:16.406 | 99.99th=[47973] 00:34:16.406 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1952.63, stdev=56.85, samples=19 00:34:16.406 iops : min= 479, max= 512, avg=488.16, stdev=14.21, samples=19 00:34:16.406 lat (msec) : 20=0.12%, 50=99.88% 00:34:16.406 cpu : usr=99.05%, sys=0.60%, ctx=52, majf=0, minf=34 00:34:16.406 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 issued rwts: total=4880,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:34:16.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.406 filename1: (groupid=0, jobs=1): err= 0: pid=41409: Fri Apr 26 13:17:19 2024 00:34:16.406 read: IOPS=506, BW=2026KiB/s (2075kB/s)(19.8MiB/10013msec) 00:34:16.406 slat (nsec): min=2876, max=73806, avg=12128.06, stdev=9677.69 00:34:16.406 clat (usec): min=1374, max=56139, avg=31478.32, stdev=5776.76 00:34:16.406 lat (usec): min=1379, max=56144, avg=31490.45, stdev=5778.06 00:34:16.406 clat percentiles (usec): 00:34:16.406 | 1.00th=[ 1631], 5.00th=[24511], 10.00th=[32113], 20.00th=[32375], 00:34:16.406 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.406 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.406 | 99.00th=[34866], 99.50th=[35390], 99.90th=[46924], 99.95th=[52167], 00:34:16.406 | 99.99th=[56361] 00:34:16.406 bw ( KiB/s): min= 1904, max= 3312, per=4.30%, avg=2026.84, stdev=316.31, samples=19 00:34:16.406 iops : min= 476, max= 828, avg=506.63, stdev=79.07, samples=19 00:34:16.406 lat (msec) : 2=1.26%, 4=1.89%, 10=0.32%, 20=0.51%, 50=95.94% 00:34:16.406 lat (msec) : 100=0.08% 00:34:16.406 cpu : usr=99.09%, sys=0.56%, ctx=53, majf=0, minf=64 00:34:16.406 IO depths : 1=5.5%, 2=11.5%, 4=24.1%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.406 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.406 filename1: (groupid=0, jobs=1): err= 0: pid=41410: Fri Apr 26 13:17:19 2024 00:34:16.406 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10008msec) 00:34:16.406 slat (nsec): min=5521, max=80065, avg=22628.85, stdev=14743.40 00:34:16.406 clat (usec): min=14039, max=50480, avg=32621.77, stdev=1710.95 00:34:16.406 lat (usec): min=14045, max=50496, avg=32644.40, stdev=1710.43 00:34:16.406 clat percentiles (usec): 00:34:16.406 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:16.406 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.406 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.406 | 99.00th=[34866], 99.50th=[40109], 99.90th=[50594], 99.95th=[50594], 00:34:16.406 | 99.99th=[50594] 00:34:16.406 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.05, stdev=66.92, samples=19 00:34:16.406 iops : min= 448, max= 512, avg=486.47, stdev=16.67, samples=19 00:34:16.406 lat (msec) : 20=0.41%, 50=99.26%, 100=0.33% 00:34:16.406 cpu : usr=99.28%, sys=0.43%, ctx=17, majf=0, minf=53 00:34:16.406 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename1: (groupid=0, jobs=1): err= 0: pid=41411: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10013msec) 00:34:16.407 slat (nsec): min=5469, max=93273, avg=21439.14, stdev=14800.94 00:34:16.407 clat (usec): min=26563, max=45891, avg=32742.24, stdev=1240.04 00:34:16.407 lat (usec): min=26570, max=45900, avg=32763.68, stdev=1239.16 00:34:16.407 clat percentiles (usec): 
00:34:16.407 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:16.407 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.407 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.407 | 99.00th=[38011], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:34:16.407 | 99.99th=[45876] 00:34:16.407 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1938.25, stdev=61.90, samples=20 00:34:16.407 iops : min= 448, max= 512, avg=484.45, stdev=15.35, samples=20 00:34:16.407 lat (msec) : 50=100.00% 00:34:16.407 cpu : usr=99.25%, sys=0.47%, ctx=11, majf=0, minf=64 00:34:16.407 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename1: (groupid=0, jobs=1): err= 0: pid=41413: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10017msec) 00:34:16.407 slat (usec): min=5, max=121, avg=22.73, stdev=17.39 00:34:16.407 clat (usec): min=20016, max=39482, avg=32536.65, stdev=1193.56 00:34:16.407 lat (usec): min=20031, max=39526, avg=32559.38, stdev=1192.39 00:34:16.407 clat percentiles (usec): 00:34:16.407 | 1.00th=[28443], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:34:16.407 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.407 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.407 | 99.00th=[34866], 99.50th=[35390], 99.90th=[39584], 99.95th=[39584], 00:34:16.407 | 99.99th=[39584] 00:34:16.407 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1951.40, stdev=57.24, samples=20 00:34:16.407 iops : min= 479, max= 512, avg=487.85, stdev=14.31, samples=20 00:34:16.407 lat (msec) : 50=100.00% 00:34:16.407 cpu : usr=98.99%, sys=0.62%, ctx=29, majf=0, minf=58 00:34:16.407 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename2: (groupid=0, jobs=1): err= 0: pid=41414: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10010msec) 00:34:16.407 slat (nsec): min=5515, max=89790, avg=23990.65, stdev=14675.43 00:34:16.407 clat (usec): min=14329, max=52380, avg=32602.62, stdev=1635.38 00:34:16.407 lat (usec): min=14350, max=52401, avg=32626.61, stdev=1635.00 00:34:16.407 clat percentiles (usec): 00:34:16.407 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:34:16.407 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.407 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.407 | 99.00th=[34866], 99.50th=[34866], 99.90th=[52167], 99.95th=[52167], 00:34:16.407 | 99.99th=[52167] 00:34:16.407 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1946.26, stdev=66.96, samples=19 00:34:16.407 iops : min= 448, max= 512, avg=486.37, stdev=16.67, samples=19 00:34:16.407 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:34:16.407 cpu : usr=98.30%, sys=0.97%, 
ctx=298, majf=0, minf=38 00:34:16.407 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename2: (groupid=0, jobs=1): err= 0: pid=41415: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10004msec) 00:34:16.407 slat (nsec): min=5490, max=83630, avg=14162.95, stdev=10622.86 00:34:16.407 clat (usec): min=17923, max=34886, avg=32572.90, stdev=1207.46 00:34:16.407 lat (usec): min=17952, max=34902, avg=32587.07, stdev=1206.90 00:34:16.407 clat percentiles (usec): 00:34:16.407 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:16.407 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.407 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.407 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:34:16.407 | 99.99th=[34866] 00:34:16.407 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1953.00, stdev=57.74, samples=19 00:34:16.407 iops : min= 479, max= 512, avg=488.21, stdev=14.37, samples=19 00:34:16.407 lat (msec) : 20=0.25%, 50=99.75% 00:34:16.407 cpu : usr=99.22%, sys=0.48%, ctx=11, majf=0, minf=52 00:34:16.407 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename2: (groupid=0, jobs=1): err= 0: pid=41416: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10023msec) 00:34:16.407 slat (usec): min=5, max=120, avg=22.07, stdev=16.74 00:34:16.407 clat (usec): min=14544, max=41828, avg=32545.75, stdev=1426.64 00:34:16.407 lat (usec): min=14550, max=41834, avg=32567.81, stdev=1425.68 00:34:16.407 clat percentiles (usec): 00:34:16.407 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:34:16.407 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.407 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:16.407 | 99.00th=[34341], 99.50th=[39584], 99.90th=[41157], 99.95th=[41681], 00:34:16.407 | 99.99th=[41681] 00:34:16.407 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1951.30, stdev=56.12, samples=20 00:34:16.407 iops : min= 479, max= 512, avg=487.75, stdev=13.90, samples=20 00:34:16.407 lat (msec) : 20=0.33%, 50=99.67% 00:34:16.407 cpu : usr=99.16%, sys=0.49%, ctx=72, majf=0, minf=36 00:34:16.407 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename2: (groupid=0, jobs=1): err= 0: pid=41417: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=487, BW=1951KiB/s (1997kB/s)(19.1MiB/10007msec) 00:34:16.407 slat (nsec): 
min=5531, max=76433, avg=20595.98, stdev=12711.49 00:34:16.407 clat (usec): min=14235, max=50075, avg=32612.11, stdev=1545.85 00:34:16.407 lat (usec): min=14241, max=50092, avg=32632.71, stdev=1545.59 00:34:16.407 clat percentiles (usec): 00:34:16.407 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:16.407 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.407 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.407 | 99.00th=[34341], 99.50th=[34866], 99.90th=[50070], 99.95th=[50070], 00:34:16.407 | 99.99th=[50070] 00:34:16.407 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.05, stdev=68.39, samples=19 00:34:16.407 iops : min= 448, max= 512, avg=486.47, stdev=17.04, samples=19 00:34:16.407 lat (msec) : 20=0.33%, 50=99.45%, 100=0.23% 00:34:16.407 cpu : usr=99.12%, sys=0.59%, ctx=10, majf=0, minf=34 00:34:16.407 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename2: (groupid=0, jobs=1): err= 0: pid=41418: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10029msec) 00:34:16.407 slat (usec): min=5, max=111, avg=21.11, stdev=16.81 00:34:16.407 clat (usec): min=17221, max=51003, avg=32277.31, stdev=2656.36 00:34:16.407 lat (usec): min=17274, max=51016, avg=32298.42, stdev=2656.59 00:34:16.407 clat percentiles (usec): 00:34:16.407 | 1.00th=[22414], 5.00th=[26870], 10.00th=[31851], 20.00th=[32113], 00:34:16.407 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.407 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.407 | 99.00th=[40109], 99.50th=[43779], 99.90th=[51119], 99.95th=[51119], 00:34:16.407 | 99.99th=[51119] 00:34:16.407 bw ( KiB/s): min= 1888, max= 2128, per=4.18%, avg=1972.15, stdev=67.13, samples=20 00:34:16.407 iops : min= 472, max= 532, avg=493.00, stdev=16.76, samples=20 00:34:16.407 lat (msec) : 20=0.24%, 50=99.64%, 100=0.12% 00:34:16.407 cpu : usr=99.03%, sys=0.68%, ctx=15, majf=0, minf=51 00:34:16.407 IO depths : 1=4.6%, 2=9.3%, 4=19.7%, 8=57.7%, 16=8.6%, 32=0.0%, >=64=0.0% 00:34:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.407 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.407 filename2: (groupid=0, jobs=1): err= 0: pid=41419: Fri Apr 26 13:17:19 2024 00:34:16.407 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10018msec) 00:34:16.407 slat (nsec): min=5443, max=77242, avg=13189.13, stdev=9872.85 00:34:16.407 clat (usec): min=15270, max=51880, avg=32500.67, stdev=2968.62 00:34:16.407 lat (usec): min=15276, max=51887, avg=32513.86, stdev=2968.93 00:34:16.407 clat percentiles (usec): 00:34:16.407 | 1.00th=[20579], 5.00th=[27395], 10.00th=[32113], 20.00th=[32375], 00:34:16.407 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:34:16.408 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[36439], 00:34:16.408 | 99.00th=[40109], 99.50th=[46924], 99.90th=[48497], 99.95th=[51643], 00:34:16.408 | 
99.99th=[51643] 00:34:16.408 bw ( KiB/s): min= 1888, max= 2144, per=4.16%, avg=1962.30, stdev=69.78, samples=20 00:34:16.408 iops : min= 472, max= 536, avg=490.50, stdev=17.44, samples=20 00:34:16.408 lat (msec) : 20=0.71%, 50=99.21%, 100=0.08% 00:34:16.408 cpu : usr=99.06%, sys=0.65%, ctx=16, majf=0, minf=50 00:34:16.408 IO depths : 1=4.2%, 2=8.4%, 4=17.7%, 8=60.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:34:16.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.408 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.408 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.408 filename2: (groupid=0, jobs=1): err= 0: pid=41420: Fri Apr 26 13:17:19 2024 00:34:16.408 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10010msec) 00:34:16.408 slat (nsec): min=5464, max=75198, avg=13126.24, stdev=10167.72 00:34:16.408 clat (usec): min=14119, max=53234, avg=32711.92, stdev=1764.64 00:34:16.408 lat (usec): min=14129, max=53256, avg=32725.05, stdev=1764.53 00:34:16.408 clat percentiles (usec): 00:34:16.408 | 1.00th=[28967], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:16.408 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:16.408 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:16.408 | 99.00th=[34866], 99.50th=[39060], 99.90th=[53216], 99.95th=[53216], 00:34:16.408 | 99.99th=[53216] 00:34:16.408 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.11, stdev=67.34, samples=19 00:34:16.408 iops : min= 448, max= 512, avg=486.37, stdev=16.67, samples=19 00:34:16.408 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:34:16.408 cpu : usr=99.29%, sys=0.41%, ctx=15, majf=0, minf=47 00:34:16.408 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:16.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.408 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.408 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.408 filename2: (groupid=0, jobs=1): err= 0: pid=41421: Fri Apr 26 13:17:19 2024 00:34:16.408 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.8MiB/10023msec) 00:34:16.408 slat (usec): min=5, max=115, avg=16.57, stdev=16.53 00:34:16.408 clat (usec): min=12792, max=53087, avg=31498.60, stdev=3937.49 00:34:16.408 lat (usec): min=12800, max=53095, avg=31515.17, stdev=3939.70 00:34:16.408 clat percentiles (usec): 00:34:16.408 | 1.00th=[13566], 5.00th=[23200], 10.00th=[26870], 20.00th=[32113], 00:34:16.408 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:16.408 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:16.408 | 99.00th=[36963], 99.50th=[39584], 99.90th=[53216], 99.95th=[53216], 00:34:16.408 | 99.99th=[53216] 00:34:16.408 bw ( KiB/s): min= 1916, max= 2472, per=4.28%, avg=2020.90, stdev=145.91, samples=20 00:34:16.408 iops : min= 479, max= 618, avg=505.15, stdev=36.43, samples=20 00:34:16.408 lat (msec) : 20=3.43%, 50=96.45%, 100=0.12% 00:34:16.408 cpu : usr=99.08%, sys=0.61%, ctx=49, majf=0, minf=52 00:34:16.408 IO depths : 1=4.8%, 2=9.9%, 4=21.4%, 8=56.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:34:16.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.408 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.408 issued 
rwts: total=5070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:16.408 00:34:16.408 Run status group 0 (all jobs): 00:34:16.408 READ: bw=46.0MiB/s (48.3MB/s), 1943KiB/s-2056KiB/s (1990kB/s-2105kB/s), io=462MiB (484MB), run=10003-10029msec 00:34:16.408 13:17:19 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:16.408 13:17:19 -- target/dif.sh@43 -- # local sub 00:34:16.408 13:17:19 -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.408 13:17:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:16.408 13:17:19 -- target/dif.sh@36 -- # local sub_id=0 00:34:16.408 13:17:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.408 13:17:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:16.408 13:17:19 -- target/dif.sh@36 -- # local sub_id=1 00:34:16.408 13:17:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.408 13:17:19 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:16.408 13:17:19 -- target/dif.sh@36 -- # local sub_id=2 00:34:16.408 13:17:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:16.408 13:17:19 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:16.408 13:17:19 -- target/dif.sh@115 -- # numjobs=2 00:34:16.408 13:17:19 -- target/dif.sh@115 -- # iodepth=8 00:34:16.408 13:17:19 -- target/dif.sh@115 -- # runtime=5 00:34:16.408 13:17:19 -- target/dif.sh@115 -- # files=1 00:34:16.408 13:17:19 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:16.408 13:17:19 -- target/dif.sh@28 -- # local sub 00:34:16.408 13:17:19 -- target/dif.sh@30 -- # for sub in "$@" 00:34:16.408 13:17:19 -- target/dif.sh@31 -- # create_subsystem 0 00:34:16.408 13:17:19 -- target/dif.sh@18 -- # local sub_id=0 00:34:16.408 13:17:19 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 bdev_null0 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 [2024-04-26 13:17:19.908047] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@30 -- # for sub in "$@" 00:34:16.408 13:17:19 -- target/dif.sh@31 -- # create_subsystem 1 00:34:16.408 13:17:19 -- target/dif.sh@18 -- # local sub_id=1 00:34:16.408 13:17:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 bdev_null1 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.408 13:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:16.408 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.408 13:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:16.408 13:17:19 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:16.408 13:17:19 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:16.408 13:17:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:16.408 13:17:19 -- nvmf/common.sh@521 -- # config=() 00:34:16.408 13:17:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.408 13:17:19 -- nvmf/common.sh@521 -- # local subsystem config 
00:34:16.408 13:17:19 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.408 13:17:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:16.408 13:17:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:16.408 { 00:34:16.408 "params": { 00:34:16.408 "name": "Nvme$subsystem", 00:34:16.408 "trtype": "$TEST_TRANSPORT", 00:34:16.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.408 "adrfam": "ipv4", 00:34:16.408 "trsvcid": "$NVMF_PORT", 00:34:16.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.409 "hdgst": ${hdgst:-false}, 00:34:16.409 "ddgst": ${ddgst:-false} 00:34:16.409 }, 00:34:16.409 "method": "bdev_nvme_attach_controller" 00:34:16.409 } 00:34:16.409 EOF 00:34:16.409 )") 00:34:16.409 13:17:19 -- target/dif.sh@82 -- # gen_fio_conf 00:34:16.409 13:17:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:16.409 13:17:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:16.409 13:17:19 -- target/dif.sh@54 -- # local file 00:34:16.409 13:17:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:16.409 13:17:19 -- target/dif.sh@56 -- # cat 00:34:16.409 13:17:19 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.409 13:17:19 -- common/autotest_common.sh@1327 -- # shift 00:34:16.409 13:17:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:16.409 13:17:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.409 13:17:19 -- nvmf/common.sh@543 -- # cat 00:34:16.409 13:17:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.409 13:17:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:16.409 13:17:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:16.409 13:17:19 -- target/dif.sh@72 -- # (( file <= files )) 00:34:16.409 13:17:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:16.409 13:17:19 -- target/dif.sh@73 -- # cat 00:34:16.409 13:17:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:16.409 13:17:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:16.409 { 00:34:16.409 "params": { 00:34:16.409 "name": "Nvme$subsystem", 00:34:16.409 "trtype": "$TEST_TRANSPORT", 00:34:16.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.409 "adrfam": "ipv4", 00:34:16.409 "trsvcid": "$NVMF_PORT", 00:34:16.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.409 "hdgst": ${hdgst:-false}, 00:34:16.409 "ddgst": ${ddgst:-false} 00:34:16.409 }, 00:34:16.409 "method": "bdev_nvme_attach_controller" 00:34:16.409 } 00:34:16.409 EOF 00:34:16.409 )") 00:34:16.409 13:17:19 -- target/dif.sh@72 -- # (( file++ )) 00:34:16.409 13:17:19 -- target/dif.sh@72 -- # (( file <= files )) 00:34:16.409 13:17:19 -- nvmf/common.sh@543 -- # cat 00:34:16.409 13:17:19 -- nvmf/common.sh@545 -- # jq . 
00:34:16.409 13:17:19 -- nvmf/common.sh@546 -- # IFS=, 00:34:16.409 13:17:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:16.409 "params": { 00:34:16.409 "name": "Nvme0", 00:34:16.409 "trtype": "tcp", 00:34:16.409 "traddr": "10.0.0.2", 00:34:16.409 "adrfam": "ipv4", 00:34:16.409 "trsvcid": "4420", 00:34:16.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.409 "hdgst": false, 00:34:16.409 "ddgst": false 00:34:16.409 }, 00:34:16.409 "method": "bdev_nvme_attach_controller" 00:34:16.409 },{ 00:34:16.409 "params": { 00:34:16.409 "name": "Nvme1", 00:34:16.409 "trtype": "tcp", 00:34:16.409 "traddr": "10.0.0.2", 00:34:16.409 "adrfam": "ipv4", 00:34:16.409 "trsvcid": "4420", 00:34:16.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:16.409 "hdgst": false, 00:34:16.409 "ddgst": false 00:34:16.409 }, 00:34:16.409 "method": "bdev_nvme_attach_controller" 00:34:16.409 }' 00:34:16.409 13:17:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:16.409 13:17:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:16.409 13:17:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.409 13:17:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.409 13:17:20 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:16.409 13:17:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:16.409 13:17:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:16.409 13:17:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:16.409 13:17:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:16.409 13:17:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.409 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:16.409 ... 00:34:16.409 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:16.409 ... 
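For this pass the harness rebuilds the targets with --dif-type 1 null bdevs and switches to a mixed-block-size job (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, two subsystems). The target-side setup it traces maps directly onto SPDK's rpc.py; the sketch below reuses the same RPC names and arguments shown in the rpc_cmd calls above, with the rpc.py path assumed and the tcp transport taken to have been created earlier in the run.

#!/usr/bin/env bash
# Sketch of the target-side setup mirrored from the rpc_cmd trace above:
# two 64 MiB null bdevs (512-byte blocks, 16-byte metadata, DIF type 1)
# exported over NVMe/TCP on 10.0.0.2:4420.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in 0 1; do
    "$RPC" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done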
00:34:16.409 fio-3.35 00:34:16.409 Starting 4 threads 00:34:16.409 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.693 00:34:21.693 filename0: (groupid=0, jobs=1): err= 0: pid=43756: Fri Apr 26 13:17:26 2024 00:34:21.693 read: IOPS=2286, BW=17.9MiB/s (18.7MB/s)(89.4MiB/5003msec) 00:34:21.693 slat (nsec): min=5335, max=40910, avg=7730.46, stdev=2055.32 00:34:21.693 clat (usec): min=1148, max=6005, avg=3478.86, stdev=555.59 00:34:21.693 lat (usec): min=1165, max=6018, avg=3486.59, stdev=555.47 00:34:21.693 clat percentiles (usec): 00:34:21.693 | 1.00th=[ 2311], 5.00th=[ 2638], 10.00th=[ 2835], 20.00th=[ 3032], 00:34:21.693 | 30.00th=[ 3195], 40.00th=[ 3326], 50.00th=[ 3458], 60.00th=[ 3589], 00:34:21.693 | 70.00th=[ 3752], 80.00th=[ 3818], 90.00th=[ 4178], 95.00th=[ 4555], 00:34:21.693 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5604], 00:34:21.693 | 99.99th=[ 5800] 00:34:21.693 bw ( KiB/s): min=17648, max=19216, per=27.43%, avg=18293.33, stdev=465.45, samples=9 00:34:21.693 iops : min= 2206, max= 2402, avg=2286.67, stdev=58.18, samples=9 00:34:21.693 lat (msec) : 2=0.45%, 4=87.06%, 10=12.49% 00:34:21.693 cpu : usr=98.02%, sys=1.70%, ctx=6, majf=0, minf=64 00:34:21.693 IO depths : 1=0.1%, 2=1.9%, 4=67.1%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.693 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.693 issued rwts: total=11438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.693 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:21.693 filename0: (groupid=0, jobs=1): err= 0: pid=43757: Fri Apr 26 13:17:26 2024 00:34:21.693 read: IOPS=1963, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5001msec) 00:34:21.693 slat (nsec): min=5284, max=50385, avg=8009.16, stdev=2362.06 00:34:21.693 clat (usec): min=2222, max=7153, avg=4053.47, stdev=732.55 00:34:21.693 lat (usec): min=2227, max=7161, avg=4061.48, stdev=732.49 00:34:21.693 clat percentiles (usec): 00:34:21.693 | 1.00th=[ 3064], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556], 00:34:21.693 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3851], 00:34:21.693 | 70.00th=[ 4047], 80.00th=[ 4359], 90.00th=[ 5473], 95.00th=[ 5735], 00:34:21.693 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 6587], 99.95th=[ 7046], 00:34:21.693 | 99.99th=[ 7177] 00:34:21.693 bw ( KiB/s): min=15520, max=16064, per=23.56%, avg=15710.22, stdev=209.45, samples=9 00:34:21.693 iops : min= 1940, max= 2008, avg=1963.78, stdev=26.18, samples=9 00:34:21.693 lat (msec) : 4=67.08%, 10=32.92% 00:34:21.693 cpu : usr=97.88%, sys=1.86%, ctx=7, majf=0, minf=37 00:34:21.693 IO depths : 1=0.1%, 2=0.3%, 4=72.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.693 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.693 issued rwts: total=9817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.693 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:21.693 filename1: (groupid=0, jobs=1): err= 0: pid=43758: Fri Apr 26 13:17:26 2024 00:34:21.693 read: IOPS=2018, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5002msec) 00:34:21.693 slat (nsec): min=5282, max=41332, avg=7654.19, stdev=2346.37 00:34:21.693 clat (usec): min=1724, max=6896, avg=3940.70, stdev=701.02 00:34:21.693 lat (usec): min=1730, max=6901, avg=3948.35, stdev=701.05 00:34:21.693 clat percentiles (usec): 00:34:21.693 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3326], 
20.00th=[ 3490], 00:34:21.693 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 00:34:21.693 | 70.00th=[ 3982], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 5538], 00:34:21.693 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6456], 99.95th=[ 6652], 00:34:21.693 | 99.99th=[ 6915] 00:34:21.693 bw ( KiB/s): min=15840, max=16720, per=24.23%, avg=16160.00, stdev=297.40, samples=9 00:34:21.693 iops : min= 1980, max= 2090, avg=2020.00, stdev=37.18, samples=9 00:34:21.693 lat (msec) : 2=0.05%, 4=71.75%, 10=28.20% 00:34:21.693 cpu : usr=97.66%, sys=2.06%, ctx=6, majf=0, minf=54 00:34:21.693 IO depths : 1=0.3%, 2=0.9%, 4=72.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.693 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.693 issued rwts: total=10099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.693 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:21.693 filename1: (groupid=0, jobs=1): err= 0: pid=43759: Fri Apr 26 13:17:26 2024 00:34:21.693 read: IOPS=2069, BW=16.2MiB/s (17.0MB/s)(80.9MiB/5001msec) 00:34:21.693 slat (usec): min=5, max=117, avg= 8.78, stdev= 3.14 00:34:21.693 clat (usec): min=1285, max=6635, avg=3841.18, stdev=604.66 00:34:21.693 lat (usec): min=1290, max=6643, avg=3849.96, stdev=604.46 00:34:21.693 clat percentiles (usec): 00:34:21.693 | 1.00th=[ 2802], 5.00th=[ 3163], 10.00th=[ 3294], 20.00th=[ 3458], 00:34:21.693 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:34:21.693 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 4555], 95.00th=[ 5407], 00:34:21.693 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6390], 00:34:21.693 | 99.99th=[ 6652] 00:34:21.693 bw ( KiB/s): min=15984, max=16897, per=24.79%, avg=16528.11, stdev=360.13, samples=9 00:34:21.693 iops : min= 1998, max= 2112, avg=2066.00, stdev=45.00, samples=9 00:34:21.693 lat (msec) : 2=0.03%, 4=74.65%, 10=25.32% 00:34:21.693 cpu : usr=90.72%, sys=5.80%, ctx=158, majf=0, minf=39 00:34:21.694 IO depths : 1=0.1%, 2=0.6%, 4=72.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.694 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.694 issued rwts: total=10349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.694 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:21.694 00:34:21.694 Run status group 0 (all jobs): 00:34:21.694 READ: bw=65.1MiB/s (68.3MB/s), 15.3MiB/s-17.9MiB/s (16.1MB/s-18.7MB/s), io=326MiB (342MB), run=5001-5003msec 00:34:21.694 13:17:26 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:21.694 13:17:26 -- target/dif.sh@43 -- # local sub 00:34:21.694 13:17:26 -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.694 13:17:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:21.694 13:17:26 -- target/dif.sh@36 -- # local sub_id=0 00:34:21.694 13:17:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:21.694 13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 13:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 13:17:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:21.694 13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 13:17:26 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 13:17:26 -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.694 13:17:26 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:21.694 13:17:26 -- target/dif.sh@36 -- # local sub_id=1 00:34:21.694 13:17:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.694 13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 13:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 13:17:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:21.694 13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 13:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 00:34:21.694 real 0m24.472s 00:34:21.694 user 5m16.196s 00:34:21.694 sys 0m3.780s 00:34:21.694 13:17:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 ************************************ 00:34:21.694 END TEST fio_dif_rand_params 00:34:21.694 ************************************ 00:34:21.694 13:17:26 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:21.694 13:17:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:21.694 13:17:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 ************************************ 00:34:21.694 START TEST fio_dif_digest 00:34:21.694 ************************************ 00:34:21.694 13:17:26 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:34:21.694 13:17:26 -- target/dif.sh@123 -- # local NULL_DIF 00:34:21.694 13:17:26 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:21.694 13:17:26 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:21.694 13:17:26 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:21.694 13:17:26 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:21.694 13:17:26 -- target/dif.sh@127 -- # numjobs=3 00:34:21.694 13:17:26 -- target/dif.sh@127 -- # iodepth=3 00:34:21.694 13:17:26 -- target/dif.sh@127 -- # runtime=10 00:34:21.694 13:17:26 -- target/dif.sh@128 -- # hdgst=true 00:34:21.694 13:17:26 -- target/dif.sh@128 -- # ddgst=true 00:34:21.694 13:17:26 -- target/dif.sh@130 -- # create_subsystems 0 00:34:21.694 13:17:26 -- target/dif.sh@28 -- # local sub 00:34:21.694 13:17:26 -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.694 13:17:26 -- target/dif.sh@31 -- # create_subsystem 0 00:34:21.694 13:17:26 -- target/dif.sh@18 -- # local sub_id=0 00:34:21.694 13:17:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:21.694 13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 bdev_null0 00:34:21.694 13:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 13:17:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:21.694 13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 13:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 13:17:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:21.694 
13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 13:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 13:17:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:21.694 13:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.694 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:34:21.694 [2024-04-26 13:17:26.553635] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.694 13:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.694 13:17:26 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:21.694 13:17:26 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:21.694 13:17:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:21.694 13:17:26 -- nvmf/common.sh@521 -- # config=() 00:34:21.694 13:17:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.694 13:17:26 -- nvmf/common.sh@521 -- # local subsystem config 00:34:21.694 13:17:26 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.694 13:17:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:21.694 13:17:26 -- target/dif.sh@82 -- # gen_fio_conf 00:34:21.694 13:17:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:21.694 { 00:34:21.694 "params": { 00:34:21.694 "name": "Nvme$subsystem", 00:34:21.694 "trtype": "$TEST_TRANSPORT", 00:34:21.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.694 "adrfam": "ipv4", 00:34:21.694 "trsvcid": "$NVMF_PORT", 00:34:21.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.694 "hdgst": ${hdgst:-false}, 00:34:21.694 "ddgst": ${ddgst:-false} 00:34:21.694 }, 00:34:21.694 "method": "bdev_nvme_attach_controller" 00:34:21.694 } 00:34:21.694 EOF 00:34:21.694 )") 00:34:21.694 13:17:26 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:21.694 13:17:26 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:21.694 13:17:26 -- target/dif.sh@54 -- # local file 00:34:21.694 13:17:26 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:21.694 13:17:26 -- target/dif.sh@56 -- # cat 00:34:21.694 13:17:26 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.694 13:17:26 -- common/autotest_common.sh@1327 -- # shift 00:34:21.694 13:17:26 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:21.694 13:17:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.694 13:17:26 -- nvmf/common.sh@543 -- # cat 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.694 13:17:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:21.694 13:17:26 -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:21.694 13:17:26 -- nvmf/common.sh@545 -- # jq . 
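The subsystem this digest workload targets was created a few trace lines earlier with four RPCs. Issued directly with SPDK's scripts/rpc.py instead of the harness's rpc_cmd wrapper, the same setup looks roughly like this (arguments copied from the trace; the rpc.py invocation style is an assumed equivalent):

# Sketch of create_subsystem 0 for fio_dif_digest, via rpc.py against the running target.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"

# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3 (NULL_DIF=3 above).
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The initiator side then attaches with "hdgst": true and "ddgst": true, so every
# NVMe/TCP PDU carries header and data digests, which is what this test exercises.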
00:34:21.694 13:17:26 -- nvmf/common.sh@546 -- # IFS=, 00:34:21.694 13:17:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:21.694 "params": { 00:34:21.694 "name": "Nvme0", 00:34:21.694 "trtype": "tcp", 00:34:21.694 "traddr": "10.0.0.2", 00:34:21.694 "adrfam": "ipv4", 00:34:21.694 "trsvcid": "4420", 00:34:21.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:21.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:21.694 "hdgst": true, 00:34:21.694 "ddgst": true 00:34:21.694 }, 00:34:21.694 "method": "bdev_nvme_attach_controller" 00:34:21.694 }' 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:21.694 13:17:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:21.694 13:17:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:21.694 13:17:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:21.694 13:17:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:21.694 13:17:26 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:21.694 13:17:26 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.954 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:21.954 ... 00:34:21.954 fio-3.35 00:34:21.954 Starting 3 threads 00:34:21.954 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.191 00:34:34.191 filename0: (groupid=0, jobs=1): err= 0: pid=45132: Fri Apr 26 13:17:37 2024 00:34:34.191 read: IOPS=234, BW=29.4MiB/s (30.8MB/s)(295MiB/10043msec) 00:34:34.191 slat (usec): min=5, max=128, avg= 8.41, stdev= 3.04 00:34:34.191 clat (usec): min=7331, max=53983, avg=12743.22, stdev=1687.99 00:34:34.191 lat (usec): min=7339, max=53991, avg=12751.63, stdev=1688.09 00:34:34.191 clat percentiles (usec): 00:34:34.191 | 1.00th=[ 8848], 5.00th=[10683], 10.00th=[11338], 20.00th=[11863], 00:34:34.191 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:34:34.191 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14091], 95.00th=[14615], 00:34:34.191 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16909], 99.95th=[51643], 00:34:34.191 | 99.99th=[53740] 00:34:34.191 bw ( KiB/s): min=28160, max=32768, per=36.00%, avg=30166.55, stdev=1129.79, samples=20 00:34:34.191 iops : min= 220, max= 256, avg=235.65, stdev= 8.82, samples=20 00:34:34.191 lat (msec) : 10=2.76%, 20=97.16%, 100=0.08% 00:34:34.191 cpu : usr=95.49%, sys=4.25%, ctx=39, majf=0, minf=169 00:34:34.191 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.191 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.191 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:34.191 filename0: (groupid=0, jobs=1): err= 0: pid=45133: Fri Apr 26 13:17:37 2024 00:34:34.191 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(264MiB/10047msec) 00:34:34.191 slat (nsec): min=5573, max=60839, avg=8033.58, stdev=1904.34 00:34:34.191 clat (usec): min=8682, max=52099, 
avg=14266.80, stdev=1719.17 00:34:34.191 lat (usec): min=8691, max=52106, avg=14274.83, stdev=1719.06 00:34:34.191 clat percentiles (usec): 00:34:34.191 | 1.00th=[10159], 5.00th=[12125], 10.00th=[12649], 20.00th=[13304], 00:34:34.191 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:34:34.191 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15926], 95.00th=[16450], 00:34:34.191 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19006], 99.95th=[46400], 00:34:34.191 | 99.99th=[52167] 00:34:34.191 bw ( KiB/s): min=26112, max=28672, per=32.17%, avg=26956.80, stdev=766.20, samples=20 00:34:34.191 iops : min= 204, max= 224, avg=210.60, stdev= 5.99, samples=20 00:34:34.191 lat (msec) : 10=0.90%, 20=99.00%, 50=0.05%, 100=0.05% 00:34:34.191 cpu : usr=96.63%, sys=3.14%, ctx=22, majf=0, minf=141 00:34:34.191 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.191 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.191 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:34.191 filename0: (groupid=0, jobs=1): err= 0: pid=45134: Fri Apr 26 13:17:37 2024 00:34:34.191 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(264MiB/10046msec) 00:34:34.191 slat (nsec): min=5595, max=30743, avg=8385.76, stdev=1515.59 00:34:34.191 clat (usec): min=9630, max=94957, avg=14245.55, stdev=4112.53 00:34:34.191 lat (usec): min=9637, max=94966, avg=14253.93, stdev=4112.56 00:34:34.191 clat percentiles (usec): 00:34:34.191 | 1.00th=[11469], 5.00th=[12387], 10.00th=[12649], 20.00th=[13173], 00:34:34.191 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:34:34.191 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:34:34.191 | 99.00th=[17171], 99.50th=[53740], 99.90th=[55837], 99.95th=[94897], 00:34:34.191 | 99.99th=[94897] 00:34:34.191 bw ( KiB/s): min=22784, max=28416, per=32.21%, avg=26992.50, stdev=1354.12, samples=20 00:34:34.191 iops : min= 178, max= 222, avg=210.85, stdev=10.59, samples=20 00:34:34.191 lat (msec) : 10=0.05%, 20=99.24%, 50=0.09%, 100=0.62% 00:34:34.191 cpu : usr=96.62%, sys=3.14%, ctx=21, majf=0, minf=107 00:34:34.191 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.191 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.191 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:34.191 00:34:34.191 Run status group 0 (all jobs): 00:34:34.191 READ: bw=81.8MiB/s (85.8MB/s), 26.2MiB/s-29.4MiB/s (27.5MB/s-30.8MB/s), io=822MiB (862MB), run=10043-10047msec 00:34:34.191 13:17:37 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:34.191 13:17:37 -- target/dif.sh@43 -- # local sub 00:34:34.191 13:17:37 -- target/dif.sh@45 -- # for sub in "$@" 00:34:34.191 13:17:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:34.191 13:17:37 -- target/dif.sh@36 -- # local sub_id=0 00:34:34.191 13:17:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:34.191 13:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.191 13:17:37 -- common/autotest_common.sh@10 -- # set +x 00:34:34.191 13:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.191 13:17:37 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:34.191 13:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.191 13:17:37 -- common/autotest_common.sh@10 -- # set +x 00:34:34.191 13:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.191 00:34:34.191 real 0m11.130s 00:34:34.191 user 0m43.169s 00:34:34.191 sys 0m1.383s 00:34:34.191 13:17:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:34.191 13:17:37 -- common/autotest_common.sh@10 -- # set +x 00:34:34.191 ************************************ 00:34:34.191 END TEST fio_dif_digest 00:34:34.191 ************************************ 00:34:34.191 13:17:37 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:34.191 13:17:37 -- target/dif.sh@147 -- # nvmftestfini 00:34:34.191 13:17:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:34.191 13:17:37 -- nvmf/common.sh@117 -- # sync 00:34:34.191 13:17:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:34.191 13:17:37 -- nvmf/common.sh@120 -- # set +e 00:34:34.191 13:17:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:34.191 13:17:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:34.191 rmmod nvme_tcp 00:34:34.191 rmmod nvme_fabrics 00:34:34.191 rmmod nvme_keyring 00:34:34.191 13:17:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:34.191 13:17:37 -- nvmf/common.sh@124 -- # set -e 00:34:34.191 13:17:37 -- nvmf/common.sh@125 -- # return 0 00:34:34.191 13:17:37 -- nvmf/common.sh@478 -- # '[' -n 34778 ']' 00:34:34.191 13:17:37 -- nvmf/common.sh@479 -- # killprocess 34778 00:34:34.191 13:17:37 -- common/autotest_common.sh@936 -- # '[' -z 34778 ']' 00:34:34.191 13:17:37 -- common/autotest_common.sh@940 -- # kill -0 34778 00:34:34.191 13:17:37 -- common/autotest_common.sh@941 -- # uname 00:34:34.191 13:17:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:34.191 13:17:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 34778 00:34:34.191 13:17:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:34.191 13:17:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:34.191 13:17:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 34778' 00:34:34.191 killing process with pid 34778 00:34:34.191 13:17:37 -- common/autotest_common.sh@955 -- # kill 34778 00:34:34.191 13:17:37 -- common/autotest_common.sh@960 -- # wait 34778 00:34:34.191 13:17:37 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:34:34.191 13:17:37 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:36.739 Waiting for block devices as requested 00:34:36.739 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:36.739 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:36.739 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:36.739 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:36.739 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:36.739 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:36.739 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:36.739 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:37.000 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:37.000 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:37.260 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:37.260 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:37.260 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:37.521 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:37.521 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:37.521 
0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:37.521 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:37.781 13:17:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:37.781 13:17:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:37.781 13:17:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:37.781 13:17:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:37.781 13:17:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.781 13:17:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:37.781 13:17:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.327 13:17:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:40.327 00:34:40.327 real 1m17.678s 00:34:40.327 user 7m56.743s 00:34:40.327 sys 0m19.463s 00:34:40.327 13:17:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:40.327 13:17:44 -- common/autotest_common.sh@10 -- # set +x 00:34:40.327 ************************************ 00:34:40.327 END TEST nvmf_dif 00:34:40.327 ************************************ 00:34:40.327 13:17:44 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:40.327 13:17:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:40.327 13:17:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:40.327 13:17:44 -- common/autotest_common.sh@10 -- # set +x 00:34:40.327 ************************************ 00:34:40.327 START TEST nvmf_abort_qd_sizes 00:34:40.327 ************************************ 00:34:40.327 13:17:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:40.327 * Looking for test storage... 
00:34:40.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:40.327 13:17:45 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:40.327 13:17:45 -- nvmf/common.sh@7 -- # uname -s 00:34:40.327 13:17:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.327 13:17:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.327 13:17:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.327 13:17:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.327 13:17:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.327 13:17:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.327 13:17:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.327 13:17:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.327 13:17:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.327 13:17:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.327 13:17:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:40.327 13:17:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:40.327 13:17:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.327 13:17:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.327 13:17:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:40.327 13:17:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.327 13:17:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.327 13:17:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.327 13:17:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.327 13:17:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.327 13:17:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.327 13:17:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.328 13:17:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.328 13:17:45 -- paths/export.sh@5 -- # export PATH 00:34:40.328 13:17:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.328 13:17:45 -- nvmf/common.sh@47 -- # : 0 00:34:40.328 13:17:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:40.328 13:17:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:40.328 13:17:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.328 13:17:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.328 13:17:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.328 13:17:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:40.328 13:17:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:40.328 13:17:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:40.328 13:17:45 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:40.328 13:17:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:34:40.328 13:17:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.328 13:17:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:34:40.328 13:17:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:34:40.328 13:17:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:34:40.328 13:17:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.328 13:17:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:40.328 13:17:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.328 13:17:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:34:40.328 13:17:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:34:40.328 13:17:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:40.328 13:17:45 -- common/autotest_common.sh@10 -- # set +x 00:34:46.918 13:17:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:46.918 13:17:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:46.918 13:17:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:46.918 13:17:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:46.918 13:17:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:46.918 13:17:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:46.918 13:17:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:46.918 13:17:51 -- nvmf/common.sh@295 -- # net_devs=() 00:34:46.918 13:17:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:46.918 13:17:51 -- nvmf/common.sh@296 -- # e810=() 00:34:46.918 13:17:51 -- nvmf/common.sh@296 -- # local -ga e810 00:34:46.918 13:17:51 -- nvmf/common.sh@297 -- # x722=() 00:34:46.918 13:17:51 -- nvmf/common.sh@297 -- # local -ga x722 00:34:46.918 13:17:51 -- nvmf/common.sh@298 -- # mlx=() 00:34:46.918 13:17:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:46.918 13:17:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:46.918 13:17:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:46.918 13:17:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:46.918 13:17:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:46.918 13:17:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.918 13:17:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:46.918 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:46.918 13:17:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.918 13:17:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:46.918 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:46.918 13:17:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:46.918 13:17:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.918 13:17:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.918 13:17:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:46.918 13:17:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.918 13:17:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:46.918 Found net devices under 0000:31:00.0: cvl_0_0 00:34:46.918 13:17:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.918 13:17:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.918 13:17:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.918 13:17:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:46.918 13:17:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.918 13:17:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:46.918 Found net devices under 0000:31:00.1: cvl_0_1 00:34:46.918 13:17:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.918 13:17:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:34:46.918 13:17:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:34:46.918 13:17:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:34:46.918 13:17:51 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:34:46.918 13:17:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:34:46.918 13:17:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:46.918 13:17:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:46.918 13:17:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:46.918 13:17:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:46.918 13:17:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:46.918 13:17:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:46.918 13:17:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:46.918 13:17:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:46.918 13:17:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:46.918 13:17:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:46.918 13:17:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:46.918 13:17:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:46.918 13:17:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.179 13:17:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.179 13:17:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.179 13:17:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:47.179 13:17:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.179 13:17:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.179 13:17:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.179 13:17:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:47.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:34:47.179 00:34:47.179 --- 10.0.0.2 ping statistics --- 00:34:47.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.179 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:34:47.179 13:17:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:47.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:34:47.179 00:34:47.179 --- 10.0.0.1 ping statistics --- 00:34:47.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.179 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:34:47.179 13:17:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.179 13:17:52 -- nvmf/common.sh@411 -- # return 0 00:34:47.179 13:17:52 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:34:47.179 13:17:52 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:50.480 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:50.480 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:50.480 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:50.741 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:51.002 13:17:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.002 13:17:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:34:51.002 13:17:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:34:51.002 13:17:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.002 13:17:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:34:51.002 13:17:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:34:51.002 13:17:56 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:51.002 13:17:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:34:51.002 13:17:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:51.002 13:17:56 -- common/autotest_common.sh@10 -- # set +x 00:34:51.264 13:17:56 -- nvmf/common.sh@470 -- # nvmfpid=54647 00:34:51.264 13:17:56 -- nvmf/common.sh@471 -- # waitforlisten 54647 00:34:51.264 13:17:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:51.264 13:17:56 -- common/autotest_common.sh@817 -- # '[' -z 54647 ']' 00:34:51.264 13:17:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.264 13:17:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:51.264 13:17:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.264 13:17:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:51.264 13:17:56 -- common/autotest_common.sh@10 -- # set +x 00:34:51.264 [2024-04-26 13:17:56.112792] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:34:51.264 [2024-04-26 13:17:56.112835] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.264 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.264 [2024-04-26 13:17:56.178592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:51.264 [2024-04-26 13:17:56.243330] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:51.264 [2024-04-26 13:17:56.243369] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:51.264 [2024-04-26 13:17:56.243379] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:51.264 [2024-04-26 13:17:56.243387] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:51.264 [2024-04-26 13:17:56.243394] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:51.264 [2024-04-26 13:17:56.243565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.264 [2024-04-26 13:17:56.243682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:51.264 [2024-04-26 13:17:56.243845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.264 [2024-04-26 13:17:56.243857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:51.836 13:17:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:51.836 13:17:56 -- common/autotest_common.sh@850 -- # return 0 00:34:51.836 13:17:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:34:51.836 13:17:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:51.836 13:17:56 -- common/autotest_common.sh@10 -- # set +x 00:34:52.097 13:17:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:52.097 13:17:56 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:52.097 13:17:56 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:52.097 13:17:56 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:52.097 13:17:56 -- scripts/common.sh@309 -- # local bdf bdfs 00:34:52.097 13:17:56 -- scripts/common.sh@310 -- # local nvmes 00:34:52.097 13:17:56 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:34:52.097 13:17:56 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:52.097 13:17:56 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:34:52.097 13:17:56 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:34:52.097 13:17:56 -- scripts/common.sh@320 -- # uname -s 00:34:52.097 13:17:56 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:34:52.097 13:17:56 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:34:52.097 13:17:56 -- scripts/common.sh@325 -- # (( 1 )) 00:34:52.097 13:17:56 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:34:52.097 13:17:56 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:52.097 13:17:56 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:34:52.097 13:17:56 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:52.097 13:17:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:52.097 13:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:52.097 13:17:56 -- 
common/autotest_common.sh@10 -- # set +x 00:34:52.097 ************************************ 00:34:52.097 START TEST spdk_target_abort 00:34:52.097 ************************************ 00:34:52.097 13:17:57 -- common/autotest_common.sh@1111 -- # spdk_target 00:34:52.097 13:17:57 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:52.097 13:17:57 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:34:52.097 13:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:52.097 13:17:57 -- common/autotest_common.sh@10 -- # set +x 00:34:52.357 spdk_targetn1 00:34:52.357 13:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:52.357 13:17:57 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:52.357 13:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:52.358 13:17:57 -- common/autotest_common.sh@10 -- # set +x 00:34:52.358 [2024-04-26 13:17:57.395898] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.358 13:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:52.358 13:17:57 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:52.358 13:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:52.358 13:17:57 -- common/autotest_common.sh@10 -- # set +x 00:34:52.358 13:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:52.358 13:17:57 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:52.358 13:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:52.358 13:17:57 -- common/autotest_common.sh@10 -- # set +x 00:34:52.619 13:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:52.619 13:17:57 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:52.619 13:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:52.619 13:17:57 -- common/autotest_common.sh@10 -- # set +x 00:34:52.619 [2024-04-26 13:17:57.436147] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.619 13:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:52.619 13:17:57 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:52.619 13:17:57 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:52.619 13:17:57 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:52.619 13:17:57 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
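The rabort helper being traced here reduces to a small loop: assemble the transport ID string from trtype/adrfam/traddr/trsvcid/subnqn, then run the abort example once per queue depth so aborts race against 4, 24 and 64 outstanding I/Os. A sketch, using the exact flags seen in the invocations below:

# Sketch of the rabort loop (flags match the abort invocations in this log).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -q queue depth, -w rw with -M 50 (roughly half reads), -o 4 KiB I/Os, -r transport ID.
    "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done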
00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:52.620 13:17:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:52.620 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.620 [2024-04-26 13:17:57.643274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:392 len:8 PRP1 0x2000078be000 PRP2 0x0 00:34:52.620 [2024-04-26 13:17:57.643297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0034 p:1 m:0 dnr:0 00:34:52.620 [2024-04-26 13:17:57.653062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:776 len:8 PRP1 0x2000078be000 PRP2 0x0 00:34:52.620 [2024-04-26 13:17:57.653079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:34:52.620 [2024-04-26 13:17:57.667284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1264 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:34:52.620 [2024-04-26 13:17:57.667300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:009f p:1 m:0 dnr:0 00:34:52.881 [2024-04-26 13:17:57.690770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2096 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:34:52.881 [2024-04-26 13:17:57.690787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:52.881 [2024-04-26 13:17:57.706036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2632 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:34:52.881 [2024-04-26 13:17:57.706051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:52.881 [2024-04-26 13:17:57.713988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2896 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:34:52.881 [2024-04-26 13:17:57.714002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:52.881 [2024-04-26 13:17:57.736165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3608 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:34:52.881 [2024-04-26 13:17:57.736181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00c4 p:0 m:0 dnr:0 00:34:56.184 Initializing NVMe Controllers 00:34:56.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:56.184 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:56.184 Initialization complete. Launching workers. 00:34:56.184 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11989, failed: 7 00:34:56.184 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3758, failed to submit 8238 00:34:56.184 success 693, unsuccess 3065, failed 0 00:34:56.184 13:18:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:56.184 13:18:00 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:56.184 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.184 [2024-04-26 13:18:00.945048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:3352 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:34:56.184 [2024-04-26 13:18:00.945091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00a5 p:0 m:0 dnr:0 00:34:56.184 [2024-04-26 13:18:00.977003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:4008 len:8 PRP1 0x200007c40000 PRP2 0x0 00:34:56.184 [2024-04-26 13:18:00.977032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:56.184 [2024-04-26 13:18:01.188110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:9000 len:8 PRP1 0x200007c46000 PRP2 0x0 00:34:56.184 [2024-04-26 13:18:01.188138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:34:59.490 Initializing NVMe Controllers 00:34:59.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:59.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:59.490 Initialization complete. Launching workers. 00:34:59.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8537, failed: 3 00:34:59.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7298 00:34:59.490 success 320, unsuccess 922, failed 0 00:34:59.490 13:18:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:59.490 13:18:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.490 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.086 [2024-04-26 13:18:06.590555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:258920 len:8 PRP1 0x200007922000 PRP2 0x0 00:35:02.086 [2024-04-26 13:18:06.590592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:02.346 Initializing NVMe Controllers 00:35:02.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:02.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:02.346 Initialization complete. Launching workers. 
00:35:02.346 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42114, failed: 1 00:35:02.346 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2645, failed to submit 39470 00:35:02.346 success 596, unsuccess 2049, failed 0 00:35:02.346 13:18:07 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:02.346 13:18:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:02.346 13:18:07 -- common/autotest_common.sh@10 -- # set +x 00:35:02.346 13:18:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:02.346 13:18:07 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:02.346 13:18:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:02.346 13:18:07 -- common/autotest_common.sh@10 -- # set +x 00:35:04.304 13:18:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:04.304 13:18:09 -- target/abort_qd_sizes.sh@61 -- # killprocess 54647 00:35:04.304 13:18:09 -- common/autotest_common.sh@936 -- # '[' -z 54647 ']' 00:35:04.304 13:18:09 -- common/autotest_common.sh@940 -- # kill -0 54647 00:35:04.304 13:18:09 -- common/autotest_common.sh@941 -- # uname 00:35:04.304 13:18:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:04.304 13:18:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54647 00:35:04.304 13:18:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:04.304 13:18:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:04.304 13:18:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54647' 00:35:04.304 killing process with pid 54647 00:35:04.304 13:18:09 -- common/autotest_common.sh@955 -- # kill 54647 00:35:04.304 13:18:09 -- common/autotest_common.sh@960 -- # wait 54647 00:35:04.304 00:35:04.304 real 0m12.256s 00:35:04.304 user 0m50.496s 00:35:04.304 sys 0m1.669s 00:35:04.304 13:18:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:04.304 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:35:04.304 ************************************ 00:35:04.304 END TEST spdk_target_abort 00:35:04.304 ************************************ 00:35:04.564 13:18:09 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:04.564 13:18:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:04.564 13:18:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:04.564 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:35:04.564 ************************************ 00:35:04.564 START TEST kernel_target_abort 00:35:04.564 ************************************ 00:35:04.564 13:18:09 -- common/autotest_common.sh@1111 -- # kernel_target 00:35:04.564 13:18:09 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:04.564 13:18:09 -- nvmf/common.sh@717 -- # local ip 00:35:04.564 13:18:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:35:04.564 13:18:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:35:04.564 13:18:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.564 13:18:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.564 13:18:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:35:04.564 13:18:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.564 13:18:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:35:04.564 13:18:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:35:04.564 13:18:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
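get_main_ns_ip above picks the address the kernel target will listen on: for tcp it is the initiator-side IP (10.0.0.1 on cvl_0_1), while an rdma run would fall back to NVMF_FIRST_TARGET_IP. A trivial sketch of that selection:

# Sketch of the transport-based IP selection done by get_main_ns_ip.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

if [[ $TEST_TRANSPORT == rdma ]]; then
    target_ip=$NVMF_FIRST_TARGET_IP
else
    target_ip=$NVMF_INITIATOR_IP   # the tcp case taken here
fi
echo "$target_ip"   # -> 10.0.0.1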
00:35:04.564 13:18:09 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:04.564 13:18:09 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:04.564 13:18:09 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:35:04.564 13:18:09 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:04.564 13:18:09 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:04.565 13:18:09 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:04.565 13:18:09 -- nvmf/common.sh@628 -- # local block nvme 00:35:04.565 13:18:09 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:35:04.565 13:18:09 -- nvmf/common.sh@631 -- # modprobe nvmet 00:35:04.565 13:18:09 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:04.565 13:18:09 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:07.881 Waiting for block devices as requested 00:35:07.881 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:08.142 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:08.142 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:08.142 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:08.402 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:08.402 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:08.402 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:08.661 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:08.661 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:08.661 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:08.921 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:08.921 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:08.921 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:08.921 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:09.182 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:09.182 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:09.182 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:09.442 13:18:14 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:35:09.442 13:18:14 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:09.442 13:18:14 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:35:09.442 13:18:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:09.442 13:18:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:09.442 13:18:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:09.442 13:18:14 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:35:09.442 13:18:14 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:09.442 13:18:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:09.442 No valid GPT data, bailing 00:35:09.442 13:18:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:09.442 13:18:14 -- scripts/common.sh@391 -- # pt= 00:35:09.442 13:18:14 -- scripts/common.sh@392 -- # return 1 00:35:09.442 13:18:14 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:35:09.442 13:18:14 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:35:09.442 13:18:14 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:09.442 13:18:14 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 
00:35:09.702 13:18:14 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:09.702 13:18:14 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:09.702 13:18:14 -- nvmf/common.sh@656 -- # echo 1 00:35:09.702 13:18:14 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:35:09.702 13:18:14 -- nvmf/common.sh@658 -- # echo 1 00:35:09.702 13:18:14 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:35:09.702 13:18:14 -- nvmf/common.sh@661 -- # echo tcp 00:35:09.702 13:18:14 -- nvmf/common.sh@662 -- # echo 4420 00:35:09.702 13:18:14 -- nvmf/common.sh@663 -- # echo ipv4 00:35:09.702 13:18:14 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:09.702 13:18:14 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:09.702 00:35:09.702 Discovery Log Number of Records 2, Generation counter 2 00:35:09.702 =====Discovery Log Entry 0====== 00:35:09.702 trtype: tcp 00:35:09.702 adrfam: ipv4 00:35:09.702 subtype: current discovery subsystem 00:35:09.702 treq: not specified, sq flow control disable supported 00:35:09.702 portid: 1 00:35:09.702 trsvcid: 4420 00:35:09.702 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:09.702 traddr: 10.0.0.1 00:35:09.702 eflags: none 00:35:09.702 sectype: none 00:35:09.702 =====Discovery Log Entry 1====== 00:35:09.702 trtype: tcp 00:35:09.702 adrfam: ipv4 00:35:09.702 subtype: nvme subsystem 00:35:09.702 treq: not specified, sq flow control disable supported 00:35:09.702 portid: 1 00:35:09.702 trsvcid: 4420 00:35:09.702 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:09.702 traddr: 10.0.0.1 00:35:09.702 eflags: none 00:35:09.702 sectype: none 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:09.702 13:18:14 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@29 -- # 
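
The kernel-mode target used for this pass was assembled through nvmet configfs by configure_kernel_target. A minimal standalone equivalent of those steps is sketched below; the attribute file names are inferred (bash xtrace does not print redirection targets), so treat them as illustrative rather than a copy of nvmf/common.sh.

#!/usr/bin/env bash
# Sketch of the configfs setup traced above: one subsystem, one namespace
# backed by /dev/nvme0n1, and a TCP port listening on 10.0.0.1:4420 (IPv4).
set -e
modprobe nvmet-tcp

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo 1            > "$subsys/attr_allow_any_host"        # attribute names inferred
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"
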
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:09.703 13:18:14 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:09.703 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.003 Initializing NVMe Controllers 00:35:13.003 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:13.003 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:13.003 Initialization complete. Launching workers. 00:35:13.003 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65068, failed: 0 00:35:13.003 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 65068, failed to submit 0 00:35:13.003 success 0, unsuccess 65068, failed 0 00:35:13.003 13:18:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:13.003 13:18:17 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:13.003 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.349 Initializing NVMe Controllers 00:35:16.349 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:16.349 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:16.349 Initialization complete. Launching workers. 00:35:16.349 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107085, failed: 0 00:35:16.349 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26978, failed to submit 80107 00:35:16.349 success 0, unsuccess 26978, failed 0 00:35:16.349 13:18:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:16.349 13:18:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:16.349 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.895 Initializing NVMe Controllers 00:35:18.895 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.895 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.895 Initialization complete. Launching workers. 
00:35:18.895 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102686, failed: 0 00:35:18.895 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25690, failed to submit 76996 00:35:18.895 success 0, unsuccess 25690, failed 0 00:35:18.895 13:18:23 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:18.895 13:18:23 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:18.895 13:18:23 -- nvmf/common.sh@675 -- # echo 0 00:35:18.895 13:18:23 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.895 13:18:23 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.895 13:18:23 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:18.895 13:18:23 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.895 13:18:23 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:35:18.895 13:18:23 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:35:19.155 13:18:23 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:22.459 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:22.459 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:23.844 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:24.417 00:35:24.417 real 0m19.699s 00:35:24.417 user 0m9.287s 00:35:24.417 sys 0m5.888s 00:35:24.417 13:18:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:24.417 13:18:29 -- common/autotest_common.sh@10 -- # set +x 00:35:24.417 ************************************ 00:35:24.417 END TEST kernel_target_abort 00:35:24.417 ************************************ 00:35:24.417 13:18:29 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:24.417 13:18:29 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:24.417 13:18:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:35:24.417 13:18:29 -- nvmf/common.sh@117 -- # sync 00:35:24.417 13:18:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:24.417 13:18:29 -- nvmf/common.sh@120 -- # set +e 00:35:24.417 13:18:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:24.417 13:18:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:24.417 rmmod nvme_tcp 00:35:24.417 rmmod nvme_fabrics 00:35:24.417 rmmod nvme_keyring 00:35:24.417 13:18:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:24.417 13:18:29 -- nvmf/common.sh@124 -- # set -e 00:35:24.417 13:18:29 -- nvmf/common.sh@125 -- # return 0 00:35:24.417 13:18:29 -- nvmf/common.sh@478 -- # '[' -n 54647 ']' 
00:35:24.417 13:18:29 -- nvmf/common.sh@479 -- # killprocess 54647 00:35:24.417 13:18:29 -- common/autotest_common.sh@936 -- # '[' -z 54647 ']' 00:35:24.417 13:18:29 -- common/autotest_common.sh@940 -- # kill -0 54647 00:35:24.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (54647) - No such process 00:35:24.417 13:18:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 54647 is not found' 00:35:24.417 Process with pid 54647 is not found 00:35:24.417 13:18:29 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:35:24.417 13:18:29 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:27.718 Waiting for block devices as requested 00:35:27.718 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:27.718 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:27.718 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:27.980 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:27.980 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:27.980 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:28.240 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:28.241 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:28.241 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:28.502 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:28.502 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:28.762 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:28.762 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:28.762 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:28.762 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:29.022 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:29.022 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:29.283 13:18:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:35:29.283 13:18:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:35:29.283 13:18:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:29.283 13:18:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:29.283 13:18:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.283 13:18:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.283 13:18:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.828 13:18:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:31.828 00:35:31.828 real 0m51.217s 00:35:31.828 user 1m5.189s 00:35:31.828 sys 0m17.963s 00:35:31.828 13:18:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:31.828 13:18:36 -- common/autotest_common.sh@10 -- # set +x 00:35:31.828 ************************************ 00:35:31.828 END TEST nvmf_abort_qd_sizes 00:35:31.828 ************************************ 00:35:31.828 13:18:36 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:31.829 13:18:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:31.829 13:18:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:31.829 13:18:36 -- common/autotest_common.sh@10 -- # set +x 00:35:31.829 ************************************ 00:35:31.829 START TEST keyring_file 00:35:31.829 ************************************ 00:35:31.829 13:18:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:31.829 * Looking for test storage... 
00:35:31.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:31.829 13:18:36 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:31.829 13:18:36 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.829 13:18:36 -- nvmf/common.sh@7 -- # uname -s 00:35:31.829 13:18:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.829 13:18:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.829 13:18:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.829 13:18:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.829 13:18:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.829 13:18:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.829 13:18:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.829 13:18:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.829 13:18:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.829 13:18:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.829 13:18:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:31.829 13:18:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:31.829 13:18:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.829 13:18:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.829 13:18:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.829 13:18:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.829 13:18:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.829 13:18:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.829 13:18:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.829 13:18:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.829 13:18:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.829 13:18:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.829 13:18:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.829 13:18:36 -- paths/export.sh@5 -- # export PATH 00:35:31.829 13:18:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.829 13:18:36 -- nvmf/common.sh@47 -- # : 0 00:35:31.829 13:18:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:31.829 13:18:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:31.829 13:18:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.829 13:18:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.829 13:18:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.829 13:18:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:31.829 13:18:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:31.829 13:18:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:31.829 13:18:36 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:31.829 13:18:36 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:31.829 13:18:36 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:31.829 13:18:36 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:31.829 13:18:36 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:31.829 13:18:36 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:31.829 13:18:36 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:31.829 13:18:36 -- keyring/common.sh@15 -- # local name key digest path 00:35:31.829 13:18:36 -- keyring/common.sh@17 -- # name=key0 00:35:31.829 13:18:36 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:31.829 13:18:36 -- keyring/common.sh@17 -- # digest=0 00:35:31.829 13:18:36 -- keyring/common.sh@18 -- # mktemp 00:35:31.829 13:18:36 -- keyring/common.sh@18 -- # path=/tmp/tmp.mV42lBNhws 00:35:31.829 13:18:36 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:31.829 13:18:36 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:31.829 13:18:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:31.829 13:18:36 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:31.829 13:18:36 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:35:31.829 13:18:36 -- nvmf/common.sh@693 -- # digest=0 00:35:31.829 13:18:36 -- nvmf/common.sh@694 -- # python - 00:35:31.829 13:18:36 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mV42lBNhws 00:35:31.829 13:18:36 -- keyring/common.sh@23 -- # echo /tmp/tmp.mV42lBNhws 00:35:31.829 13:18:36 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mV42lBNhws 00:35:31.829 13:18:36 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:31.829 13:18:36 -- keyring/common.sh@15 -- # local name key digest path 00:35:31.829 13:18:36 -- keyring/common.sh@17 -- # name=key1 00:35:31.829 13:18:36 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:31.829 13:18:36 -- keyring/common.sh@17 -- # digest=0 00:35:31.829 13:18:36 -- keyring/common.sh@18 -- # mktemp 00:35:31.829 13:18:36 -- keyring/common.sh@18 -- # path=/tmp/tmp.uxnQgexrah 00:35:31.829 13:18:36 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:31.829 13:18:36 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:35:31.829 13:18:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:31.829 13:18:36 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:31.829 13:18:36 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:35:31.829 13:18:36 -- nvmf/common.sh@693 -- # digest=0 00:35:31.829 13:18:36 -- nvmf/common.sh@694 -- # python - 00:35:31.829 13:18:36 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uxnQgexrah 00:35:31.829 13:18:36 -- keyring/common.sh@23 -- # echo /tmp/tmp.uxnQgexrah 00:35:31.829 13:18:36 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uxnQgexrah 00:35:31.829 13:18:36 -- keyring/file.sh@30 -- # tgtpid=65562 00:35:31.829 13:18:36 -- keyring/file.sh@32 -- # waitforlisten 65562 00:35:31.829 13:18:36 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:31.829 13:18:36 -- common/autotest_common.sh@817 -- # '[' -z 65562 ']' 00:35:31.829 13:18:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.829 13:18:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:31.829 13:18:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.829 13:18:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:31.829 13:18:36 -- common/autotest_common.sh@10 -- # set +x 00:35:31.829 [2024-04-26 13:18:36.775947] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:35:31.829 [2024-04-26 13:18:36.776002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65562 ] 00:35:31.829 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.829 [2024-04-26 13:18:36.836759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.090 [2024-04-26 13:18:36.902749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.661 13:18:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:32.661 13:18:37 -- common/autotest_common.sh@850 -- # return 0 00:35:32.661 13:18:37 -- keyring/file.sh@33 -- # rpc_cmd 00:35:32.661 13:18:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:32.661 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:35:32.661 [2024-04-26 13:18:37.533891] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.661 null0 00:35:32.661 [2024-04-26 13:18:37.565929] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:32.661 [2024-04-26 13:18:37.566261] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:32.661 [2024-04-26 13:18:37.573942] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:32.661 13:18:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:32.661 13:18:37 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.661 13:18:37 -- common/autotest_common.sh@638 -- # local es=0 00:35:32.661 13:18:37 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.661 13:18:37 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:35:32.661 13:18:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:32.661 13:18:37 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:35:32.661 13:18:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:32.661 13:18:37 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.661 13:18:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:32.661 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:35:32.661 [2024-04-26 13:18:37.589987] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:35:32.661 { 00:35:32.661 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.661 "secure_channel": false, 00:35:32.661 "listen_address": { 00:35:32.661 "trtype": "tcp", 00:35:32.661 "traddr": "127.0.0.1", 00:35:32.661 "trsvcid": "4420" 00:35:32.661 }, 00:35:32.661 "method": "nvmf_subsystem_add_listener", 00:35:32.661 "req_id": 1 00:35:32.661 } 00:35:32.661 Got JSON-RPC error response 00:35:32.661 response: 00:35:32.661 { 00:35:32.661 "code": -32602, 00:35:32.661 "message": "Invalid parameters" 00:35:32.661 } 00:35:32.661 13:18:37 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:35:32.661 13:18:37 -- common/autotest_common.sh@641 -- # es=1 00:35:32.661 13:18:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:32.661 13:18:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:32.661 13:18:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:32.661 13:18:37 -- keyring/file.sh@46 -- # bperfpid=65709 00:35:32.661 13:18:37 -- keyring/file.sh@48 -- # waitforlisten 65709 /var/tmp/bperf.sock 00:35:32.661 13:18:37 -- common/autotest_common.sh@817 -- # '[' -z 65709 ']' 00:35:32.661 13:18:37 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:32.661 13:18:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:32.661 13:18:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:32.661 13:18:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:32.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:32.661 13:18:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:32.661 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:35:32.661 [2024-04-26 13:18:37.653348] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
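
The /tmp/tmp.* secrets registered with the keyring below were generated earlier by prep_key via format_interchange_psk. The helper's internals are not visible in the trace; the sketch below assumes the NVMe TLS PSK interchange layout (prefix, two-digit hash indicator where 00 means no hash, then base64 of the literal key bytes followed by their little-endian CRC32) and is illustrative only.

#!/usr/bin/env bash
# Illustrative reconstruction of format_interchange_psk for key0 with digest 0.
# Assumption: the output string is "NVMeTLSkey-1:00:<base64(key || crc32(key))>:".
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # literal ASCII bytes, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian CRC32 of the key
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
EOF
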
00:35:32.661 [2024-04-26 13:18:37.653410] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65709 ] 00:35:32.661 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.923 [2024-04-26 13:18:37.728087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.923 [2024-04-26 13:18:37.791359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.496 13:18:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:33.496 13:18:38 -- common/autotest_common.sh@850 -- # return 0 00:35:33.496 13:18:38 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:33.496 13:18:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:33.496 13:18:38 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uxnQgexrah 00:35:33.496 13:18:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uxnQgexrah 00:35:33.757 13:18:38 -- keyring/file.sh@51 -- # get_key key0 00:35:33.757 13:18:38 -- keyring/file.sh@51 -- # jq -r .path 00:35:33.757 13:18:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.757 13:18:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:33.757 13:18:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.018 13:18:38 -- keyring/file.sh@51 -- # [[ /tmp/tmp.mV42lBNhws == \/\t\m\p\/\t\m\p\.\m\V\4\2\l\B\N\h\w\s ]] 00:35:34.018 13:18:38 -- keyring/file.sh@52 -- # get_key key1 00:35:34.018 13:18:38 -- keyring/file.sh@52 -- # jq -r .path 00:35:34.018 13:18:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.018 13:18:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.018 13:18:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.018 13:18:39 -- keyring/file.sh@52 -- # [[ /tmp/tmp.uxnQgexrah == \/\t\m\p\/\t\m\p\.\u\x\n\Q\g\e\x\r\a\h ]] 00:35:34.018 13:18:39 -- keyring/file.sh@53 -- # get_refcnt key0 00:35:34.018 13:18:39 -- keyring/common.sh@12 -- # get_key key0 00:35:34.018 13:18:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.018 13:18:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.018 13:18:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.018 13:18:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.279 13:18:39 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:34.279 13:18:39 -- keyring/file.sh@54 -- # get_refcnt key1 00:35:34.279 13:18:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.279 13:18:39 -- keyring/common.sh@12 -- # get_key key1 00:35:34.279 13:18:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.279 13:18:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.279 13:18:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.279 13:18:39 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:34.279 13:18:39 
-- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:34.279 13:18:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:34.539 [2024-04-26 13:18:39.439858] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:34.539 nvme0n1 00:35:34.539 13:18:39 -- keyring/file.sh@59 -- # get_refcnt key0 00:35:34.539 13:18:39 -- keyring/common.sh@12 -- # get_key key0 00:35:34.539 13:18:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.539 13:18:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.539 13:18:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.539 13:18:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.800 13:18:39 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:34.800 13:18:39 -- keyring/file.sh@60 -- # get_refcnt key1 00:35:34.800 13:18:39 -- keyring/common.sh@12 -- # get_key key1 00:35:34.800 13:18:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.800 13:18:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.800 13:18:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.800 13:18:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.800 13:18:39 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:34.800 13:18:39 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:35.061 Running I/O for 1 seconds... 
00:35:36.003 00:35:36.003 Latency(us) 00:35:36.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.003 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:36.003 nvme0n1 : 1.00 13628.72 53.24 0.00 0.00 9366.28 4696.75 17585.49 00:35:36.003 =================================================================================================================== 00:35:36.003 Total : 13628.72 53.24 0.00 0.00 9366.28 4696.75 17585.49 00:35:36.003 0 00:35:36.003 13:18:40 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:36.003 13:18:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:36.264 13:18:41 -- keyring/file.sh@65 -- # get_refcnt key0 00:35:36.264 13:18:41 -- keyring/common.sh@12 -- # get_key key0 00:35:36.264 13:18:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.264 13:18:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.264 13:18:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.264 13:18:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.264 13:18:41 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:36.264 13:18:41 -- keyring/file.sh@66 -- # get_refcnt key1 00:35:36.264 13:18:41 -- keyring/common.sh@12 -- # get_key key1 00:35:36.264 13:18:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.264 13:18:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.264 13:18:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.264 13:18:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.525 13:18:41 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:36.525 13:18:41 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.525 13:18:41 -- common/autotest_common.sh@638 -- # local es=0 00:35:36.525 13:18:41 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.525 13:18:41 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:36.525 13:18:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:36.525 13:18:41 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:36.525 13:18:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:36.525 13:18:41 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.525 13:18:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.786 [2024-04-26 13:18:41.609636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:36.786 [2024-04-26 13:18:41.610094] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb318d0 (107): Transport endpoint is not connected 00:35:36.786 [2024-04-26 13:18:41.611090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb318d0 (9): Bad file descriptor 00:35:36.786 [2024-04-26 13:18:41.612092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:36.786 [2024-04-26 13:18:41.612099] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:36.786 [2024-04-26 13:18:41.612104] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:36.786 request: 00:35:36.786 { 00:35:36.786 "name": "nvme0", 00:35:36.786 "trtype": "tcp", 00:35:36.786 "traddr": "127.0.0.1", 00:35:36.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.786 "adrfam": "ipv4", 00:35:36.786 "trsvcid": "4420", 00:35:36.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.786 "psk": "key1", 00:35:36.786 "method": "bdev_nvme_attach_controller", 00:35:36.786 "req_id": 1 00:35:36.786 } 00:35:36.786 Got JSON-RPC error response 00:35:36.786 response: 00:35:36.786 { 00:35:36.786 "code": -32602, 00:35:36.786 "message": "Invalid parameters" 00:35:36.786 } 00:35:36.786 13:18:41 -- common/autotest_common.sh@641 -- # es=1 00:35:36.786 13:18:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:36.786 13:18:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:36.786 13:18:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:36.786 13:18:41 -- keyring/file.sh@71 -- # get_refcnt key0 00:35:36.786 13:18:41 -- keyring/common.sh@12 -- # get_key key0 00:35:36.786 13:18:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.786 13:18:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.786 13:18:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.786 13:18:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.786 13:18:41 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:36.786 13:18:41 -- keyring/file.sh@72 -- # get_refcnt key1 00:35:36.786 13:18:41 -- keyring/common.sh@12 -- # get_key key1 00:35:36.786 13:18:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.786 13:18:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.786 13:18:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.786 13:18:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.048 13:18:41 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:37.048 13:18:41 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:37.048 13:18:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:37.048 13:18:42 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:37.048 13:18:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:37.308 13:18:42 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:37.308 13:18:42 -- keyring/file.sh@77 -- # jq length 00:35:37.308 13:18:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.568 13:18:42 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:37.568 13:18:42 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.mV42lBNhws 00:35:37.568 13:18:42 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:37.568 13:18:42 -- common/autotest_common.sh@638 -- # local es=0 00:35:37.568 13:18:42 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:37.568 13:18:42 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:37.568 13:18:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.568 13:18:42 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:37.568 13:18:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.568 13:18:42 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:37.568 13:18:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:37.568 [2024-04-26 13:18:42.514797] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mV42lBNhws': 0100660 00:35:37.568 [2024-04-26 13:18:42.514816] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:37.568 request: 00:35:37.568 { 00:35:37.568 "name": "key0", 00:35:37.568 "path": "/tmp/tmp.mV42lBNhws", 00:35:37.568 "method": "keyring_file_add_key", 00:35:37.568 "req_id": 1 00:35:37.568 } 00:35:37.568 Got JSON-RPC error response 00:35:37.568 response: 00:35:37.568 { 00:35:37.568 "code": -1, 00:35:37.568 "message": "Operation not permitted" 00:35:37.568 } 00:35:37.568 13:18:42 -- common/autotest_common.sh@641 -- # es=1 00:35:37.568 13:18:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:37.568 13:18:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:37.568 13:18:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:37.568 13:18:42 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.mV42lBNhws 00:35:37.568 13:18:42 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:37.568 13:18:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mV42lBNhws 00:35:37.828 13:18:42 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.mV42lBNhws 00:35:37.828 13:18:42 -- keyring/file.sh@88 -- # get_refcnt key0 00:35:37.828 13:18:42 -- keyring/common.sh@12 -- # get_key key0 00:35:37.829 13:18:42 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.829 13:18:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.829 13:18:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:37.829 13:18:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.829 13:18:42 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:37.829 13:18:42 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.829 13:18:42 -- common/autotest_common.sh@638 -- # local es=0 00:35:37.829 13:18:42 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.829 13:18:42 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:37.829 13:18:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.829 13:18:42 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:37.829 13:18:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.829 13:18:42 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.829 13:18:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.090 [2024-04-26 13:18:43.008037] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mV42lBNhws': No such file or directory 00:35:38.090 [2024-04-26 13:18:43.008050] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:38.090 [2024-04-26 13:18:43.008066] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:38.090 [2024-04-26 13:18:43.008071] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:38.090 [2024-04-26 13:18:43.008075] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:38.090 request: 00:35:38.090 { 00:35:38.090 "name": "nvme0", 00:35:38.090 "trtype": "tcp", 00:35:38.090 "traddr": "127.0.0.1", 00:35:38.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.090 "adrfam": "ipv4", 00:35:38.090 "trsvcid": "4420", 00:35:38.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.090 "psk": "key0", 00:35:38.090 "method": "bdev_nvme_attach_controller", 00:35:38.090 "req_id": 1 00:35:38.090 } 00:35:38.090 Got JSON-RPC error response 00:35:38.090 response: 00:35:38.090 { 00:35:38.090 "code": -19, 00:35:38.090 "message": "No such device" 00:35:38.090 } 00:35:38.090 13:18:43 -- common/autotest_common.sh@641 -- # es=1 00:35:38.090 13:18:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:38.090 13:18:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:38.090 13:18:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:38.090 13:18:43 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:38.090 13:18:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:38.354 13:18:43 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:38.354 13:18:43 -- keyring/common.sh@15 -- # local name key digest path 00:35:38.354 13:18:43 -- keyring/common.sh@17 -- # name=key0 00:35:38.354 13:18:43 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:38.354 13:18:43 -- keyring/common.sh@17 -- # digest=0 00:35:38.354 13:18:43 -- keyring/common.sh@18 -- # mktemp 00:35:38.354 13:18:43 -- keyring/common.sh@18 -- # path=/tmp/tmp.uTOPQBnbwu 00:35:38.354 13:18:43 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:38.354 13:18:43 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:38.354 13:18:43 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:38.354 13:18:43 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:38.354 13:18:43 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:35:38.354 13:18:43 -- nvmf/common.sh@693 -- # digest=0 00:35:38.354 13:18:43 -- nvmf/common.sh@694 -- # python - 00:35:38.354 13:18:43 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uTOPQBnbwu 00:35:38.354 13:18:43 -- keyring/common.sh@23 -- # echo /tmp/tmp.uTOPQBnbwu 00:35:38.354 13:18:43 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.uTOPQBnbwu 00:35:38.354 13:18:43 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uTOPQBnbwu 00:35:38.354 13:18:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uTOPQBnbwu 00:35:38.615 13:18:43 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.615 13:18:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.615 nvme0n1 00:35:38.615 13:18:43 -- keyring/file.sh@99 -- # get_refcnt key0 00:35:38.875 13:18:43 -- keyring/common.sh@12 -- # get_key key0 00:35:38.875 13:18:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.875 13:18:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.875 13:18:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.875 13:18:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.875 13:18:43 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:38.875 13:18:43 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:38.875 13:18:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:39.136 13:18:43 -- keyring/file.sh@101 -- # get_key key0 00:35:39.136 13:18:43 -- keyring/file.sh@101 -- # jq -r .removed 00:35:39.136 13:18:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.136 13:18:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.136 13:18:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.136 13:18:44 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:39.136 13:18:44 -- keyring/file.sh@102 -- # get_refcnt key0 00:35:39.136 13:18:44 -- keyring/common.sh@12 -- # get_key key0 00:35:39.136 13:18:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.136 13:18:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.136 13:18:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.136 13:18:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.400 13:18:44 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:39.400 13:18:44 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:39.400 13:18:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:39.400 13:18:44 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:39.400 13:18:44 -- keyring/file.sh@104 -- # jq length 00:35:39.400 
13:18:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.734 13:18:44 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:39.734 13:18:44 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uTOPQBnbwu 00:35:39.734 13:18:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uTOPQBnbwu 00:35:39.734 13:18:44 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uxnQgexrah 00:35:39.734 13:18:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uxnQgexrah 00:35:40.012 13:18:44 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.012 13:18:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.272 nvme0n1 00:35:40.272 13:18:45 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:40.272 13:18:45 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:40.533 13:18:45 -- keyring/file.sh@112 -- # config='{ 00:35:40.533 "subsystems": [ 00:35:40.533 { 00:35:40.533 "subsystem": "keyring", 00:35:40.533 "config": [ 00:35:40.533 { 00:35:40.533 "method": "keyring_file_add_key", 00:35:40.533 "params": { 00:35:40.533 "name": "key0", 00:35:40.533 "path": "/tmp/tmp.uTOPQBnbwu" 00:35:40.533 } 00:35:40.533 }, 00:35:40.533 { 00:35:40.533 "method": "keyring_file_add_key", 00:35:40.533 "params": { 00:35:40.533 "name": "key1", 00:35:40.533 "path": "/tmp/tmp.uxnQgexrah" 00:35:40.533 } 00:35:40.533 } 00:35:40.533 ] 00:35:40.533 }, 00:35:40.533 { 00:35:40.533 "subsystem": "iobuf", 00:35:40.533 "config": [ 00:35:40.533 { 00:35:40.533 "method": "iobuf_set_options", 00:35:40.533 "params": { 00:35:40.533 "small_pool_count": 8192, 00:35:40.533 "large_pool_count": 1024, 00:35:40.533 "small_bufsize": 8192, 00:35:40.533 "large_bufsize": 135168 00:35:40.533 } 00:35:40.533 } 00:35:40.533 ] 00:35:40.533 }, 00:35:40.533 { 00:35:40.533 "subsystem": "sock", 00:35:40.533 "config": [ 00:35:40.533 { 00:35:40.533 "method": "sock_impl_set_options", 00:35:40.533 "params": { 00:35:40.533 "impl_name": "posix", 00:35:40.533 "recv_buf_size": 2097152, 00:35:40.533 "send_buf_size": 2097152, 00:35:40.533 "enable_recv_pipe": true, 00:35:40.533 "enable_quickack": false, 00:35:40.533 "enable_placement_id": 0, 00:35:40.533 "enable_zerocopy_send_server": true, 00:35:40.533 "enable_zerocopy_send_client": false, 00:35:40.533 "zerocopy_threshold": 0, 00:35:40.533 "tls_version": 0, 00:35:40.533 "enable_ktls": false 00:35:40.533 } 00:35:40.533 }, 00:35:40.533 { 00:35:40.533 "method": "sock_impl_set_options", 00:35:40.533 "params": { 00:35:40.533 "impl_name": "ssl", 00:35:40.533 "recv_buf_size": 4096, 00:35:40.533 "send_buf_size": 4096, 00:35:40.533 "enable_recv_pipe": true, 00:35:40.533 "enable_quickack": false, 00:35:40.533 "enable_placement_id": 0, 00:35:40.533 "enable_zerocopy_send_server": true, 00:35:40.533 "enable_zerocopy_send_client": false, 00:35:40.533 "zerocopy_threshold": 0, 00:35:40.533 
"tls_version": 0, 00:35:40.533 "enable_ktls": false 00:35:40.533 } 00:35:40.533 } 00:35:40.533 ] 00:35:40.533 }, 00:35:40.533 { 00:35:40.533 "subsystem": "vmd", 00:35:40.533 "config": [] 00:35:40.533 }, 00:35:40.533 { 00:35:40.533 "subsystem": "accel", 00:35:40.533 "config": [ 00:35:40.533 { 00:35:40.533 "method": "accel_set_options", 00:35:40.533 "params": { 00:35:40.533 "small_cache_size": 128, 00:35:40.533 "large_cache_size": 16, 00:35:40.533 "task_count": 2048, 00:35:40.533 "sequence_count": 2048, 00:35:40.533 "buf_count": 2048 00:35:40.533 } 00:35:40.533 } 00:35:40.533 ] 00:35:40.533 }, 00:35:40.533 { 00:35:40.533 "subsystem": "bdev", 00:35:40.533 "config": [ 00:35:40.533 { 00:35:40.533 "method": "bdev_set_options", 00:35:40.533 "params": { 00:35:40.533 "bdev_io_pool_size": 65535, 00:35:40.533 "bdev_io_cache_size": 256, 00:35:40.534 "bdev_auto_examine": true, 00:35:40.534 "iobuf_small_cache_size": 128, 00:35:40.534 "iobuf_large_cache_size": 16 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "bdev_raid_set_options", 00:35:40.534 "params": { 00:35:40.534 "process_window_size_kb": 1024 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "bdev_iscsi_set_options", 00:35:40.534 "params": { 00:35:40.534 "timeout_sec": 30 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "bdev_nvme_set_options", 00:35:40.534 "params": { 00:35:40.534 "action_on_timeout": "none", 00:35:40.534 "timeout_us": 0, 00:35:40.534 "timeout_admin_us": 0, 00:35:40.534 "keep_alive_timeout_ms": 10000, 00:35:40.534 "arbitration_burst": 0, 00:35:40.534 "low_priority_weight": 0, 00:35:40.534 "medium_priority_weight": 0, 00:35:40.534 "high_priority_weight": 0, 00:35:40.534 "nvme_adminq_poll_period_us": 10000, 00:35:40.534 "nvme_ioq_poll_period_us": 0, 00:35:40.534 "io_queue_requests": 512, 00:35:40.534 "delay_cmd_submit": true, 00:35:40.534 "transport_retry_count": 4, 00:35:40.534 "bdev_retry_count": 3, 00:35:40.534 "transport_ack_timeout": 0, 00:35:40.534 "ctrlr_loss_timeout_sec": 0, 00:35:40.534 "reconnect_delay_sec": 0, 00:35:40.534 "fast_io_fail_timeout_sec": 0, 00:35:40.534 "disable_auto_failback": false, 00:35:40.534 "generate_uuids": false, 00:35:40.534 "transport_tos": 0, 00:35:40.534 "nvme_error_stat": false, 00:35:40.534 "rdma_srq_size": 0, 00:35:40.534 "io_path_stat": false, 00:35:40.534 "allow_accel_sequence": false, 00:35:40.534 "rdma_max_cq_size": 0, 00:35:40.534 "rdma_cm_event_timeout_ms": 0, 00:35:40.534 "dhchap_digests": [ 00:35:40.534 "sha256", 00:35:40.534 "sha384", 00:35:40.534 "sha512" 00:35:40.534 ], 00:35:40.534 "dhchap_dhgroups": [ 00:35:40.534 "null", 00:35:40.534 "ffdhe2048", 00:35:40.534 "ffdhe3072", 00:35:40.534 "ffdhe4096", 00:35:40.534 "ffdhe6144", 00:35:40.534 "ffdhe8192" 00:35:40.534 ] 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "bdev_nvme_attach_controller", 00:35:40.534 "params": { 00:35:40.534 "name": "nvme0", 00:35:40.534 "trtype": "TCP", 00:35:40.534 "adrfam": "IPv4", 00:35:40.534 "traddr": "127.0.0.1", 00:35:40.534 "trsvcid": "4420", 00:35:40.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.534 "prchk_reftag": false, 00:35:40.534 "prchk_guard": false, 00:35:40.534 "ctrlr_loss_timeout_sec": 0, 00:35:40.534 "reconnect_delay_sec": 0, 00:35:40.534 "fast_io_fail_timeout_sec": 0, 00:35:40.534 "psk": "key0", 00:35:40.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.534 "hdgst": false, 00:35:40.534 "ddgst": false 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "bdev_nvme_set_hotplug", 
00:35:40.534 "params": { 00:35:40.534 "period_us": 100000, 00:35:40.534 "enable": false 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "bdev_wait_for_examine" 00:35:40.534 } 00:35:40.534 ] 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "subsystem": "nbd", 00:35:40.534 "config": [] 00:35:40.534 } 00:35:40.534 ] 00:35:40.534 }' 00:35:40.534 13:18:45 -- keyring/file.sh@114 -- # killprocess 65709 00:35:40.534 13:18:45 -- common/autotest_common.sh@936 -- # '[' -z 65709 ']' 00:35:40.534 13:18:45 -- common/autotest_common.sh@940 -- # kill -0 65709 00:35:40.534 13:18:45 -- common/autotest_common.sh@941 -- # uname 00:35:40.534 13:18:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:40.534 13:18:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65709 00:35:40.534 13:18:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:35:40.534 13:18:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:35:40.534 13:18:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65709' 00:35:40.534 killing process with pid 65709 00:35:40.534 13:18:45 -- common/autotest_common.sh@955 -- # kill 65709 00:35:40.534 Received shutdown signal, test time was about 1.000000 seconds 00:35:40.534 00:35:40.534 Latency(us) 00:35:40.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.534 =================================================================================================================== 00:35:40.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:40.534 13:18:45 -- common/autotest_common.sh@960 -- # wait 65709 00:35:40.534 13:18:45 -- keyring/file.sh@117 -- # bperfpid=67286 00:35:40.534 13:18:45 -- keyring/file.sh@119 -- # waitforlisten 67286 /var/tmp/bperf.sock 00:35:40.534 13:18:45 -- common/autotest_common.sh@817 -- # '[' -z 67286 ']' 00:35:40.534 13:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:40.534 13:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:40.534 13:18:45 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:40.534 13:18:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:40.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:40.534 13:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:40.534 13:18:45 -- common/autotest_common.sh@10 -- # set +x 00:35:40.534 13:18:45 -- keyring/file.sh@115 -- # echo '{ 00:35:40.534 "subsystems": [ 00:35:40.534 { 00:35:40.534 "subsystem": "keyring", 00:35:40.534 "config": [ 00:35:40.534 { 00:35:40.534 "method": "keyring_file_add_key", 00:35:40.534 "params": { 00:35:40.534 "name": "key0", 00:35:40.534 "path": "/tmp/tmp.uTOPQBnbwu" 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "keyring_file_add_key", 00:35:40.534 "params": { 00:35:40.534 "name": "key1", 00:35:40.534 "path": "/tmp/tmp.uxnQgexrah" 00:35:40.534 } 00:35:40.534 } 00:35:40.534 ] 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "subsystem": "iobuf", 00:35:40.534 "config": [ 00:35:40.534 { 00:35:40.534 "method": "iobuf_set_options", 00:35:40.534 "params": { 00:35:40.534 "small_pool_count": 8192, 00:35:40.534 "large_pool_count": 1024, 00:35:40.534 "small_bufsize": 8192, 00:35:40.534 "large_bufsize": 135168 00:35:40.534 } 00:35:40.534 } 00:35:40.534 ] 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "subsystem": "sock", 00:35:40.534 "config": [ 00:35:40.534 { 00:35:40.534 "method": "sock_impl_set_options", 00:35:40.534 "params": { 00:35:40.534 "impl_name": "posix", 00:35:40.534 "recv_buf_size": 2097152, 00:35:40.534 "send_buf_size": 2097152, 00:35:40.534 "enable_recv_pipe": true, 00:35:40.534 "enable_quickack": false, 00:35:40.534 "enable_placement_id": 0, 00:35:40.534 "enable_zerocopy_send_server": true, 00:35:40.534 "enable_zerocopy_send_client": false, 00:35:40.534 "zerocopy_threshold": 0, 00:35:40.534 "tls_version": 0, 00:35:40.534 "enable_ktls": false 00:35:40.534 } 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "method": "sock_impl_set_options", 00:35:40.534 "params": { 00:35:40.534 "impl_name": "ssl", 00:35:40.534 "recv_buf_size": 4096, 00:35:40.534 "send_buf_size": 4096, 00:35:40.534 "enable_recv_pipe": true, 00:35:40.534 "enable_quickack": false, 00:35:40.534 "enable_placement_id": 0, 00:35:40.534 "enable_zerocopy_send_server": true, 00:35:40.534 "enable_zerocopy_send_client": false, 00:35:40.534 "zerocopy_threshold": 0, 00:35:40.534 "tls_version": 0, 00:35:40.534 "enable_ktls": false 00:35:40.534 } 00:35:40.534 } 00:35:40.534 ] 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "subsystem": "vmd", 00:35:40.534 "config": [] 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "subsystem": "accel", 00:35:40.534 "config": [ 00:35:40.534 { 00:35:40.534 "method": "accel_set_options", 00:35:40.534 "params": { 00:35:40.534 "small_cache_size": 128, 00:35:40.534 "large_cache_size": 16, 00:35:40.534 "task_count": 2048, 00:35:40.534 "sequence_count": 2048, 00:35:40.534 "buf_count": 2048 00:35:40.534 } 00:35:40.534 } 00:35:40.534 ] 00:35:40.534 }, 00:35:40.534 { 00:35:40.534 "subsystem": "bdev", 00:35:40.534 "config": [ 00:35:40.534 { 00:35:40.534 "method": "bdev_set_options", 00:35:40.534 "params": { 00:35:40.535 "bdev_io_pool_size": 65535, 00:35:40.535 "bdev_io_cache_size": 256, 00:35:40.535 "bdev_auto_examine": true, 00:35:40.535 "iobuf_small_cache_size": 128, 00:35:40.535 "iobuf_large_cache_size": 16 00:35:40.535 } 00:35:40.535 }, 00:35:40.535 { 00:35:40.535 "method": "bdev_raid_set_options", 00:35:40.535 "params": { 00:35:40.535 "process_window_size_kb": 1024 00:35:40.535 } 00:35:40.535 }, 00:35:40.535 { 00:35:40.535 "method": "bdev_iscsi_set_options", 00:35:40.535 "params": { 00:35:40.535 "timeout_sec": 30 00:35:40.535 } 00:35:40.535 }, 00:35:40.535 { 00:35:40.535 "method": "bdev_nvme_set_options", 
00:35:40.535 "params": { 00:35:40.535 "action_on_timeout": "none", 00:35:40.535 "timeout_us": 0, 00:35:40.535 "timeout_admin_us": 0, 00:35:40.535 "keep_alive_timeout_ms": 10000, 00:35:40.535 "arbitration_burst": 0, 00:35:40.535 "low_priority_weight": 0, 00:35:40.535 "medium_priority_weight": 0, 00:35:40.535 "high_priority_weight": 0, 00:35:40.535 "nvme_adminq_poll_period_us": 10000, 00:35:40.535 "nvme_ioq_poll_period_us": 0, 00:35:40.535 "io_queue_requests": 512, 00:35:40.535 "delay_cmd_submit": true, 00:35:40.535 "transport_retry_count": 4, 00:35:40.535 "bdev_retry_count": 3, 00:35:40.535 "transport_ack_timeout": 0, 00:35:40.535 "ctrlr_loss_timeout_sec": 0, 00:35:40.535 "reconnect_delay_sec": 0, 00:35:40.535 "fast_io_fail_timeout_sec": 0, 00:35:40.535 "disable_auto_failback": false, 00:35:40.535 "generate_uuids": false, 00:35:40.535 "transport_tos": 0, 00:35:40.535 "nvme_error_stat": false, 00:35:40.535 "rdma_srq_size": 0, 00:35:40.535 "io_path_stat": false, 00:35:40.535 "allow_accel_sequence": false, 00:35:40.535 "rdma_max_cq_size": 0, 00:35:40.535 "rdma_cm_event_timeout_ms": 0, 00:35:40.535 "dhchap_digests": [ 00:35:40.535 "sha256", 00:35:40.535 "sha384", 00:35:40.535 "sha512" 00:35:40.535 ], 00:35:40.535 "dhchap_dhgroups": [ 00:35:40.535 "null", 00:35:40.535 "ffdhe2048", 00:35:40.535 "ffdhe3072", 00:35:40.535 "ffdhe4096", 00:35:40.535 "ffdhe6144", 00:35:40.535 "ffdhe8192" 00:35:40.535 ] 00:35:40.535 } 00:35:40.535 }, 00:35:40.535 { 00:35:40.535 "method": "bdev_nvme_attach_controller", 00:35:40.535 "params": { 00:35:40.535 "name": "nvme0", 00:35:40.535 "trtype": "TCP", 00:35:40.535 "adrfam": "IPv4", 00:35:40.535 "traddr": "127.0.0.1", 00:35:40.535 "trsvcid": "4420", 00:35:40.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.535 "prchk_reftag": false, 00:35:40.535 "prchk_guard": false, 00:35:40.535 "ctrlr_loss_timeout_sec": 0, 00:35:40.535 "reconnect_delay_sec": 0, 00:35:40.535 "fast_io_fail_timeout_sec": 0, 00:35:40.535 "psk": "key0", 00:35:40.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.535 "hdgst": false, 00:35:40.535 "ddgst": false 00:35:40.535 } 00:35:40.535 }, 00:35:40.535 { 00:35:40.535 "method": "bdev_nvme_set_hotplug", 00:35:40.535 "params": { 00:35:40.535 "period_us": 100000, 00:35:40.535 "enable": false 00:35:40.535 } 00:35:40.535 }, 00:35:40.535 { 00:35:40.535 "method": "bdev_wait_for_examine" 00:35:40.535 } 00:35:40.535 ] 00:35:40.535 }, 00:35:40.535 { 00:35:40.535 "subsystem": "nbd", 00:35:40.535 "config": [] 00:35:40.535 } 00:35:40.535 ] 00:35:40.535 }' 00:35:40.535 [2024-04-26 13:18:45.567787] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:35:40.535 [2024-04-26 13:18:45.567897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67286 ] 00:35:40.796 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.796 [2024-04-26 13:18:45.647901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.796 [2024-04-26 13:18:45.699719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.796 [2024-04-26 13:18:45.833615] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:41.366 13:18:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:41.366 13:18:46 -- common/autotest_common.sh@850 -- # return 0 00:35:41.366 13:18:46 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:41.366 13:18:46 -- keyring/file.sh@120 -- # jq length 00:35:41.366 13:18:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.626 13:18:46 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:41.626 13:18:46 -- keyring/file.sh@121 -- # get_refcnt key0 00:35:41.626 13:18:46 -- keyring/common.sh@12 -- # get_key key0 00:35:41.626 13:18:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.626 13:18:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.626 13:18:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.626 13:18:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.626 13:18:46 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:41.626 13:18:46 -- keyring/file.sh@122 -- # get_refcnt key1 00:35:41.626 13:18:46 -- keyring/common.sh@12 -- # get_key key1 00:35:41.626 13:18:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.626 13:18:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.626 13:18:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.626 13:18:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:41.886 13:18:46 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:41.886 13:18:46 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:41.886 13:18:46 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:41.886 13:18:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:41.886 13:18:46 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:41.886 13:18:46 -- keyring/file.sh@1 -- # cleanup 00:35:41.886 13:18:46 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.uTOPQBnbwu /tmp/tmp.uxnQgexrah 00:35:41.886 13:18:46 -- keyring/file.sh@20 -- # killprocess 67286 00:35:41.886 13:18:46 -- common/autotest_common.sh@936 -- # '[' -z 67286 ']' 00:35:41.886 13:18:46 -- common/autotest_common.sh@940 -- # kill -0 67286 00:35:41.886 13:18:46 -- common/autotest_common.sh@941 -- # uname 00:35:41.886 13:18:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:41.886 13:18:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67286 00:35:42.149 13:18:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:35:42.149 13:18:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:35:42.149 13:18:46 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 67286' 00:35:42.149 killing process with pid 67286 00:35:42.149 13:18:46 -- common/autotest_common.sh@955 -- # kill 67286 00:35:42.149 Received shutdown signal, test time was about 1.000000 seconds 00:35:42.149 00:35:42.149 Latency(us) 00:35:42.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.149 =================================================================================================================== 00:35:42.149 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:42.149 13:18:46 -- common/autotest_common.sh@960 -- # wait 67286 00:35:42.149 13:18:47 -- keyring/file.sh@21 -- # killprocess 65562 00:35:42.149 13:18:47 -- common/autotest_common.sh@936 -- # '[' -z 65562 ']' 00:35:42.149 13:18:47 -- common/autotest_common.sh@940 -- # kill -0 65562 00:35:42.149 13:18:47 -- common/autotest_common.sh@941 -- # uname 00:35:42.149 13:18:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:42.149 13:18:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65562 00:35:42.149 13:18:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:42.149 13:18:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:42.149 13:18:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65562' 00:35:42.149 killing process with pid 65562 00:35:42.149 13:18:47 -- common/autotest_common.sh@955 -- # kill 65562 00:35:42.149 [2024-04-26 13:18:47.161999] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:42.149 13:18:47 -- common/autotest_common.sh@960 -- # wait 65562 00:35:42.410 00:35:42.410 real 0m10.903s 00:35:42.410 user 0m26.015s 00:35:42.410 sys 0m2.496s 00:35:42.410 13:18:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:42.410 13:18:47 -- common/autotest_common.sh@10 -- # set +x 00:35:42.410 ************************************ 00:35:42.410 END TEST keyring_file 00:35:42.410 ************************************ 00:35:42.410 13:18:47 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:35:42.410 13:18:47 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:35:42.410 13:18:47 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:35:42.410 13:18:47 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:35:42.410 13:18:47 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:35:42.410 13:18:47 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:35:42.410 13:18:47 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:35:42.410 13:18:47 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:35:42.410 13:18:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:42.410 13:18:47 -- common/autotest_common.sh@10 -- # set +x 00:35:42.410 13:18:47 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:35:42.410 
13:18:47 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:35:42.410 13:18:47 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:35:42.410 13:18:47 -- common/autotest_common.sh@10 -- # set +x 00:35:50.548 INFO: APP EXITING 00:35:50.548 INFO: killing all VMs 00:35:50.548 INFO: killing vhost app 00:35:50.548 INFO: EXIT DONE 00:35:53.089 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:65:00.0 (144d a80a): Already using the nvme driver 00:35:53.089 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:35:53.089 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:35:57.296 Cleaning 00:35:57.296 Removing: /var/run/dpdk/spdk0/config 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:57.296 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:57.296 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:57.296 Removing: /var/run/dpdk/spdk1/config 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:57.296 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:57.296 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:57.296 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:57.296 Removing: /var/run/dpdk/spdk2/config 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:57.296 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:57.296 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:57.296 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:57.296 Removing: /var/run/dpdk/spdk3/config 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:57.296 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:57.296 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:57.296 Removing: /var/run/dpdk/spdk4/config 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:57.296 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:57.296 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:57.296 Removing: /dev/shm/bdev_svc_trace.1 00:35:57.296 Removing: /dev/shm/nvmf_trace.0 00:35:57.296 Removing: /dev/shm/spdk_tgt_trace.pid3754758 00:35:57.296 Removing: /var/run/dpdk/spdk0 00:35:57.296 Removing: /var/run/dpdk/spdk1 00:35:57.296 Removing: /var/run/dpdk/spdk2 00:35:57.296 Removing: /var/run/dpdk/spdk3 00:35:57.296 Removing: /var/run/dpdk/spdk4 00:35:57.296 Removing: /var/run/dpdk/spdk_pid11368 00:35:57.296 Removing: /var/run/dpdk/spdk_pid11701 00:35:57.296 Removing: /var/run/dpdk/spdk_pid1660 00:35:57.296 Removing: /var/run/dpdk/spdk_pid18858 00:35:57.296 Removing: /var/run/dpdk/spdk_pid19188 00:35:57.296 Removing: /var/run/dpdk/spdk_pid21736 00:35:57.296 Removing: /var/run/dpdk/spdk_pid2374 00:35:57.296 Removing: /var/run/dpdk/spdk_pid28890 00:35:57.296 Removing: /var/run/dpdk/spdk_pid28971 00:35:57.296 Removing: /var/run/dpdk/spdk_pid35085 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3547 00:35:57.296 Removing: /var/run/dpdk/spdk_pid37361 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3753249 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3754758 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3755650 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3756731 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3757032 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3758388 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3758555 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3759010 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3760267 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3761064 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3761450 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3761855 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3762260 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3762615 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3762842 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3763087 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3763464 00:35:57.296 Removing: 
/var/run/dpdk/spdk_pid3764877 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3768466 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3768721 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3769124 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3769222 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3769660 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3769936 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3770323 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3770549 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3770900 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3771042 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3771405 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3771426 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3772012 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3772244 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3772641 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3773021 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3773108 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3773461 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3773819 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3774173 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3774446 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3774693 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3774955 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3775298 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3775659 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3776013 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3776377 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3776734 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3776999 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3777238 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3777500 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3777859 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3778214 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3778573 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3778936 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3779280 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3779525 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3779801 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3780097 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3780521 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3785070 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3882939 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3888148 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3899291 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3905685 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3910499 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3911189 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3928342 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3928778 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3934039 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3940921 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3944006 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3956865 00:35:57.296 Removing: /var/run/dpdk/spdk_pid39592 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3967694 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3969728 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3970873 00:35:57.296 Removing: /var/run/dpdk/spdk_pid3991525 00:35:57.297 Removing: /var/run/dpdk/spdk_pid3996206 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4001422 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4003414 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4005653 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4005780 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4006117 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4006129 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4006954 00:35:57.297 Removing: 
/var/run/dpdk/spdk_pid4009717 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4010700 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4011209 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4013914 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4014625 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4015340 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4020390 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4026873 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4032791 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4077847 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4082594 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4090002 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4091539 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4093246 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4098510 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4104021 00:35:57.297 Removing: /var/run/dpdk/spdk_pid41079 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4113020 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4113136 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4118098 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4118418 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4118750 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4119091 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4119099 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4120458 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4122454 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4124452 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4126408 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4128301 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4130201 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4137483 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4138065 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4138929 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4140085 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4146045 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4149916 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4156371 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4162638 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4171063 00:35:57.297 Removing: /var/run/dpdk/spdk_pid4171081 00:35:57.557 Removing: /var/run/dpdk/spdk_pid4193736 00:35:57.557 Removing: /var/run/dpdk/spdk_pid4302 00:35:57.557 Removing: /var/run/dpdk/spdk_pid43341 00:35:57.557 Removing: /var/run/dpdk/spdk_pid44813 00:35:57.557 Removing: /var/run/dpdk/spdk_pid5006 00:35:57.557 Removing: /var/run/dpdk/spdk_pid54877 00:35:57.557 Removing: /var/run/dpdk/spdk_pid55581 00:35:57.557 Removing: /var/run/dpdk/spdk_pid56313 00:35:57.557 Removing: /var/run/dpdk/spdk_pid5797 00:35:57.557 Removing: /var/run/dpdk/spdk_pid59756 00:35:57.557 Removing: /var/run/dpdk/spdk_pid60210 00:35:57.557 Removing: /var/run/dpdk/spdk_pid60776 00:35:57.557 Removing: /var/run/dpdk/spdk_pid65562 00:35:57.557 Removing: /var/run/dpdk/spdk_pid65709 00:35:57.557 Removing: /var/run/dpdk/spdk_pid67286 00:35:57.557 Removing: /var/run/dpdk/spdk_pid800 00:35:57.557 Clean 00:35:57.818 13:19:02 -- common/autotest_common.sh@1437 -- # return 0 00:35:57.818 13:19:02 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:35:57.818 13:19:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:57.818 13:19:02 -- common/autotest_common.sh@10 -- # set +x 00:35:57.818 13:19:02 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:35:57.818 13:19:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:57.818 13:19:02 -- common/autotest_common.sh@10 -- # set +x 00:35:57.818 13:19:02 -- spdk/autotest.sh@385 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:57.818 13:19:02 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:57.818 13:19:02 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:57.818 13:19:02 -- spdk/autotest.sh@389 -- # hash lcov 00:35:57.818 13:19:02 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:57.818 13:19:02 -- spdk/autotest.sh@391 -- # hostname 00:35:57.818 13:19:02 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:57.818 geninfo: WARNING: invalid characters removed from testname! 00:36:15.919 13:19:20 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:19.213 13:19:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:21.123 13:19:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:22.522 13:19:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:23.904 13:19:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:25.287 13:19:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 
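Editor's note: the lcov calls above are the coverage post-processing step: capture the counters left behind by the test run, merge them with the cov_base.info baseline (presumably captured earlier in the job, before the tests ran), then filter out paths that should not count against SPDK coverage. A condensed view, with the repeated `--rc lcov_branch_coverage=1 ... --no-external -q` switches of the real commands elided for readability:

    # Condensed view of the lcov steps above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    OUT=$SPDK/../output

    # 1. Capture counters produced by the test run, tagged with the host name.
    lcov -c -d "$SPDK" -t spdk-cyp-12 -o "$OUT/cov_test.info"

    # 2. Merge with the pre-test baseline.
    lcov -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # 3. Strip DPDK, system headers and a few SPDK tools from the combined report.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done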
00:36:27.200 13:19:31 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:27.200 13:19:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.200 13:19:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:27.200 13:19:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.200 13:19:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.200 13:19:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.200 13:19:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.200 13:19:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.200 13:19:31 -- paths/export.sh@5 -- $ export PATH 00:36:27.200 13:19:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.200 13:19:31 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:27.200 13:19:31 -- common/autobuild_common.sh@435 -- $ date +%s 00:36:27.200 13:19:31 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714130371.XXXXXX 00:36:27.200 13:19:31 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714130371.NgbJXw 00:36:27.200 13:19:31 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:36:27.200 13:19:31 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:36:27.200 13:19:31 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:36:27.200 13:19:31 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:27.200 13:19:31 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:27.200 13:19:31 -- common/autobuild_common.sh@451 -- $ get_config_params 00:36:27.200 13:19:31 -- 
common/autotest_common.sh@385 -- $ xtrace_disable 00:36:27.200 13:19:31 -- common/autotest_common.sh@10 -- $ set +x 00:36:27.200 13:19:31 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:36:27.200 13:19:31 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:36:27.200 13:19:31 -- pm/common@17 -- $ local monitor 00:36:27.200 13:19:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:27.200 13:19:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=79753 00:36:27.200 13:19:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:27.200 13:19:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=79755 00:36:27.200 13:19:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:27.200 13:19:31 -- pm/common@21 -- $ date +%s 00:36:27.200 13:19:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=79757 00:36:27.200 13:19:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:27.200 13:19:31 -- pm/common@21 -- $ date +%s 00:36:27.200 13:19:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=79760 00:36:27.200 13:19:31 -- pm/common@26 -- $ sleep 1 00:36:27.200 13:19:31 -- pm/common@21 -- $ date +%s 00:36:27.200 13:19:31 -- pm/common@21 -- $ date +%s 00:36:27.200 13:19:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714130371 00:36:27.200 13:19:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714130371 00:36:27.200 13:19:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714130371 00:36:27.200 13:19:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714130371 00:36:27.200 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714130371_collect-vmstat.pm.log 00:36:27.200 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714130371_collect-bmc-pm.bmc.pm.log 00:36:27.200 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714130371_collect-cpu-load.pm.log 00:36:27.200 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714130371_collect-cpu-temp.pm.log 00:36:28.140 13:19:32 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:36:28.140 13:19:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:36:28.140 13:19:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:28.140 13:19:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:28.140 13:19:32 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:28.140 13:19:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:28.140 13:19:32 -- spdk/autopackage.sh@19 -- $ timing_finish 
00:36:28.140 13:19:32 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:28.140 13:19:32 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:28.140 13:19:32 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:28.140 13:19:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:28.140 13:19:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:28.140 13:19:32 -- pm/common@30 -- $ signal_monitor_resources TERM 00:36:28.140 13:19:32 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:36:28.140 13:19:32 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:28.140 13:19:32 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:28.140 13:19:32 -- pm/common@45 -- $ pid=79771 00:36:28.140 13:19:32 -- pm/common@52 -- $ sudo kill -TERM 79771 00:36:28.140 13:19:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:28.141 13:19:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:28.141 13:19:33 -- pm/common@45 -- $ pid=79773 00:36:28.141 13:19:33 -- pm/common@52 -- $ sudo kill -TERM 79773 00:36:28.141 13:19:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:28.141 13:19:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:28.141 13:19:33 -- pm/common@45 -- $ pid=79775 00:36:28.141 13:19:33 -- pm/common@52 -- $ sudo kill -TERM 79775 00:36:28.141 13:19:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:28.141 13:19:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:28.141 13:19:33 -- pm/common@45 -- $ pid=79774 00:36:28.141 13:19:33 -- pm/common@52 -- $ sudo kill -TERM 79774 00:36:28.141 + [[ -n 3633964 ]] 00:36:28.141 + sudo kill 3633964 00:36:28.151 [Pipeline] } 00:36:28.170 [Pipeline] // stage 00:36:28.175 [Pipeline] } 00:36:28.193 [Pipeline] // timeout 00:36:28.198 [Pipeline] } 00:36:28.213 [Pipeline] // catchError 00:36:28.217 [Pipeline] } 00:36:28.233 [Pipeline] // wrap 00:36:28.238 [Pipeline] } 00:36:28.252 [Pipeline] // catchError 00:36:28.259 [Pipeline] stage 00:36:28.261 [Pipeline] { (Epilogue) 00:36:28.273 [Pipeline] catchError 00:36:28.274 [Pipeline] { 00:36:28.288 [Pipeline] echo 00:36:28.289 Cleanup processes 00:36:28.294 [Pipeline] sh 00:36:28.581 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:28.581 79855 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:36:28.581 80328 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:28.595 [Pipeline] sh 00:36:28.879 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:28.879 ++ grep -v 'sudo pgrep' 00:36:28.879 ++ awk '{print $1}' 00:36:28.879 + sudo kill -9 79855 00:36:28.892 [Pipeline] sh 00:36:29.179 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:41.555 [Pipeline] sh 00:36:41.848 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:41.848 Artifacts sizes are good 00:36:41.864 [Pipeline] archiveArtifacts 00:36:41.872 Archiving artifacts 00:36:42.101 [Pipeline] sh 00:36:42.381 + sudo chown -R sys_sgci 
/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:42.393 [Pipeline] cleanWs 00:36:42.402 [WS-CLEANUP] Deleting project workspace... 00:36:42.402 [WS-CLEANUP] Deferred wipeout is used... 00:36:42.409 [WS-CLEANUP] done 00:36:42.411 [Pipeline] } 00:36:42.430 [Pipeline] // catchError 00:36:42.442 [Pipeline] sh 00:36:42.730 + logger -p user.info -t JENKINS-CI 00:36:42.744 [Pipeline] } 00:36:42.759 [Pipeline] // stage 00:36:42.763 [Pipeline] } 00:36:42.777 [Pipeline] // node 00:36:42.781 [Pipeline] End of Pipeline 00:36:42.801 Finished: SUCCESS